Separating Words: Decoding a Paper

Gödel’s Lost Letter and P=NP 2019-09-16

A clever trick on combining automata

John Robson has worked on various problems including what is still the best result on separating words—the topic we discussed the other day. Ken first knew him for his proof that {N \times N} checkers is {\mathsf{EXPTIME}}-complete and similar hardness results for chess and Go.

Today I want to talk about his theorem that any two words can be separated by an automaton with relatively few states.

In his famous paper from 1989, he proved an upper bound on the Separating Word Problem. This is the question: Given two strings {S} and {T}, how many states does a deterministic automaton need to be able to accept {S} and reject {T}? His theorem is:

Theorem 1 (Robson’s Theorem) Suppose that {S} and {T} are distinct strings of length {n}. Then there is an automaton with at most {O(n^{0.4}\log^{0.6} n)} states that accepts {S} and rejects {T}.

The story of his result is involved. For starters, it is still the best upper bound after three decades. Impressive. Another issue is that a web search does not quickly, at least for me, turn up a PDF of the original paper. More recent papers on the separating word problem reference his 1989 paper, but they do not explain how he proves it.

Recall the separating words problem: Given two distinct words of length {n}, find a deterministic finite automaton, with as few states as possible, that accepts one and rejects the other. Thus his theorem shows that the number of states grows at most roughly like the square root of {n}.
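As a warm-up, here is the classic easy case in code: when the two words have different lengths, a DFA that simply counts length modulo a suitable prime separates them with few states. This is only a sketch, and the function names are mine, not Robson's.

```python
# A DFA that accepts exactly the strings whose length is congruent to
# |S| (mod p).  When |S| != |T|, some small prime p distinguishes the
# lengths, giving a p-state separator -- the easy case of the problem.

def smallest_separating_prime(len_s: int, len_t: int) -> int:
    """Smallest prime p with len_s mod p != len_t mod p."""
    def is_prime(q):
        return q >= 2 and all(q % d for d in range(2, int(q**0.5) + 1))
    p = 2
    while True:
        if is_prime(p) and len_s % p != len_t % p:
            return p
        p += 1

def accepts(word: str, p: int, target_residue: int) -> bool:
    """Run the p-state counting DFA: the state is the length read so far mod p."""
    state = 0
    for _ in word:
        state = (state + 1) % p
    return state == target_residue

S, T = "0" * 20, "0" * 26   # same symbols, different lengths
p = smallest_separating_prime(len(S), len(T))
print(p, accepts(S, p, len(S) % p), accepts(T, p, len(S) % p))
```

The hard case, of course, is when the two words have the same length, which is what Robson's theorem addresses.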

I did finally track the paper down. The trouble for me is the paper is encrypted. Well not exactly, but the version I did find is a poor copy of the original. Here is an example to show what I mean:


[ An Example ]

So the task of decoding the proof is a challenge. A challenge, but a rewarding one.

A Cool Trick

Robson’s proof uses two insights. The first is some basic string-ology; that is, some basic facts about strings. For example, he uses the fact that a non-periodic string cannot overlap itself too much.

He also uses a clever trick on how to simulate two deterministic machines for the price of one. This in general is not possible, and is related to deep questions about automata that we have discussed before here. Robson shows that it can be done in a special but important case.

Let me explain. Suppose that {\alpha} is a string. We can easily design an automaton that accepts {x} if and only if {x} is the string {\alpha}. The machine will have order the length of {\alpha} states. So far quite simple.

Now suppose that we have a string {S} of length {n} and wish to find a particular occurrence of the pattern {\alpha} in {S}. Say there are {\#(S,\alpha)} occurrences of {\alpha} in {S}. The task is to construct an automaton that accepts at the end of the {k^{th}} copy of {\alpha}. Robson shows that this can be done by an automaton that has order

\displaystyle  \#(S,\alpha) + |\alpha|

states. Here {|\alpha|} is the length of the string {\alpha}.

This is a simple, clever, and quite useful observation. Clever indeed. The obvious automaton that can do this would seem to require a cartesian product of two machines. This would imply that it would require

\displaystyle  \#(S,\alpha) \times |\alpha|

number of states. Note the times operator {\times} rather than addition. Thus Robson’s trick is a huge improvement.

Here is how he does this.

His Trick

Robson uses a clever trick in his proof of the main lemma. Let’s work through an example with the string {100}. The goal is to see if there is a copy of this string starting at a position that is a multiple of {3}.

The machine starts in state {(0,Y)} and tries to find the correct string {100} as input. If it does, then it reaches the accepting state {(3,Y)}. If while doing this it gets a wrong input, then it switches to states that have stopped looking for the input {100}. After seeing three inputs the machine reaches {(3,N)} and then moves back to the start state.

[ The automaton ]
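To make the picture concrete, here is a small simulation of that machine (my own rendering of the states, not Robson's notation): it looks for the block {100} aligned to positions that are multiples of {3}, using the paired {(i,Y)}/{(i,N)} states.

```python
# The example machine: find a copy of "100" starting at a position
# that is a multiple of 3 (0-based).  The state is (i, "Y") while the
# current block still matches the pattern, (i, "N") once it has failed.

ALPHA = "100"
P = 3  # block length; here equal to len(ALPHA)

def run(word: str):
    """Return the 0-based position just after an aligned copy of ALPHA,
    or None if no aligned copy occurs."""
    i, track = 0, "Y"
    for pos, bit in enumerate(word):
        if track == "Y" and bit == ALPHA[i]:
            i += 1
            if i == len(ALPHA):          # reached (3, Y): accept
                return pos + 1
        else:
            track = "N"                  # wrong bit: (i, Y) -> (i+1, N)
            i += 1
        if i == P and track == "N":      # reached (3, N): reset to (0, Y)
            i, track = 0, "Y"
    return None

print(run("010100100"))
```

On the input {010100100} the first block {010} fails, the machine resets, and the second block {100} is the aligned copy.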

The Lemmas

We will now outline the proof in some detail.

Hashing

The first lemma is a simple fact about hashing.

Lemma 2 Suppose {1 \le r \le m} and

\displaystyle  1 \le k_{1} < \cdots < k_{m} \le n.

Then all but {{O}(m \log n)} primes satisfy

\displaystyle  k_{i} \equiv k_{r} \bmod p \text{ if and only if } i =r.

Proof: Consider the quantities {|k_{r} - k_{i}|} for {i} not equal to {r}. Call a prime bad if it divides one of these quantities. Each quantity is a positive integer of size at most {n}, so it is divisible by at most {\log n} primes. Since there are fewer than {m} quantities, there are at most {{O}(m\log n)} bad primes in total. \Box

Strings

We need some definitions about strings. Let {| \alpha |} be the length of the string {\alpha}. Also let {\#(S,\alpha)} be the number of occurrences of {\alpha} in {S}.

A string {\alpha} has the period {p} provided

\displaystyle  \alpha_{i} = \alpha_{i+p},

for all {i} so that {i+p} is defined. A string {\alpha} is periodic provided it has a period {p>0} that is at most half its length. Note, the shorter the period the more the string is really “periodic”: for example, the string

\displaystyle  10101010101010

is more “periodic” than

\displaystyle  10000001000000.
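These notions are easy to check by brute force. A minimal sketch (the helper names are mine):

```python
# Period check: alpha has period p iff alpha[i] == alpha[i+p] whenever
# both indices are defined; alpha is "periodic" iff some period
# 0 < p <= len(alpha)/2 exists.

def has_period(alpha: str, p: int) -> bool:
    return all(alpha[i] == alpha[i + p] for i in range(len(alpha) - p))

def is_periodic(alpha: str) -> bool:
    return any(has_period(alpha, p) for p in range(1, len(alpha) // 2 + 1))

def smallest_period(alpha: str) -> int:
    return next(p for p in range(1, len(alpha) + 1) if has_period(alpha, p))

print(smallest_period("10101010101010"))   # the "very periodic" example
print(smallest_period("10000001000000"))   # the "barely periodic" example
```

The first example has the short period {2}; the second has no period shorter than {7}, half its length.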

Lemma 3 For any string {u} either {u0} or {u1} is not periodic.

Proof: Suppose that {\beta=u\sigma} is periodic with period {p} where {\sigma} is a single character. Let the length of {\beta} equal {l}. So by definition, {1 \le p \le l/2}. Then

\displaystyle  \beta_{i} = \beta_{i+p},

for {1 \le i \le l-p}. So it follows that

\displaystyle  \beta_{l-p} = \beta_{l} = \sigma.

Since {1 \le p \le l/2}, the index {l-p} satisfies {l/2 \le l-p \le l-1}, so it lies inside {u}; that is, {u_{l-p} = \sigma}.

Now suppose that {u0} and {u1} were both periodic, with periods {p_{0}} and {p_{1}}. Both are also periods of {u}, and {p_{0}+p_{1} \le l \le |u| + \gcd(p_{0},p_{1})}, so the Fine–Wilf periodicity lemma gives {u} the period {g=\gcd(p_{0},p_{1})}. The indices {l-p_{0}} and {l-p_{1}} both lie inside {u} and differ by a multiple of {g}, so {u_{l-p_{0}} = u_{l-p_{1}}}. But the previous paragraph forces {u_{l-p_{0}} = 0} and {u_{l-p_{1}} = 1}, a contradiction. \Box
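The lemma can also be verified exhaustively for short strings; here is a minimal check using a brute-force period test (names mine):

```python
# Lemma 3 check: for every binary u up to length 12, at least one of
# u+"0", u+"1" is not periodic (no period at most half the length).

from itertools import product

def has_period(alpha, p):
    return all(alpha[i] == alpha[i + p] for i in range(len(alpha) - p))

def is_periodic(alpha):
    return any(has_period(alpha, p) for p in range(1, len(alpha) // 2 + 1))

bad = [
    "".join(u)
    for n in range(13)
    for u in product("01", repeat=n)
    if is_periodic("".join(u) + "0") and is_periodic("".join(u) + "1")
]
print(bad)   # a counterexample list; the lemma says it is empty
```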

Lemma 4 Suppose that {\alpha} is not a periodic string. Then the number of copies of {\alpha} in a string {S} is upper bounded by {{O}(M)} where

\displaystyle  M = \frac{|S|}{|\alpha|}.

Proof: The claim follows once we prove that no two copies of {\alpha} in {S} can overlap in more than {l/2} positions, where {l} is the length of {\alpha}: then successive occurrences start more than {l/2} apart, so there are at most {2|S|/l + 1} of them.

If {\alpha} has two copies in {S} that overlap then clearly

\displaystyle  \alpha_{i} = \alpha_{i+d},

for some {d>0} and all {i} in the range {1,\dots,l-d}. This says that {\alpha} has the period {d}. Since {\alpha} is not periodic it follows that {d > l/2}. The overlap of the two copies therefore has length {l-d < l/2}. Thus we have shown that they cannot overlap too much. \Box
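A small empirical check of the bound (helper names mine): count the overlapping occurrences of a non-periodic pattern and compare against {2|S|/l + 1}.

```python
# Lemma 4 check: a non-periodic alpha occurs at most about 2|S|/|alpha|
# times in S, because successive occurrences start > |alpha|/2 apart.

def occurrences(s: str, alpha: str):
    """0-based start positions of all (possibly overlapping) copies."""
    return [i for i in range(len(s) - len(alpha) + 1)
            if s[i:i + len(alpha)] == alpha]

alpha = "110"                      # not periodic: it has no period <= 1
s = "110110011011110" * 10
occ = occurrences(s, alpha)
bound = 2 * len(s) / len(alpha) + 1
print(len(occ), bound, len(occ) <= bound)
```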

Main Lemma

Say an automaton finds the {k^{th}} occurrence of {\alpha} in {S} provided it enters a special state after scanning the last bit of this occurrence.

Lemma 5 Let {S} be a string of length {n} and let {\alpha} be a non-periodic string. Then there is an automaton with at most {\widetilde{O}(M)} states that can find the {k^{th}} occurrence of {\alpha} in {S} where

\displaystyle  M = \#(S,\alpha) + |\alpha|.

Here {\widetilde{O}(M)} allows factors that are fixed powers of {\log n}. This lemma is the main insight of Robson and will be proved later.

The Main Theorem

The following is a slightly weaker version of Robson’s theorem. I am still confused a bit about his stronger theorem, to be honest.

Theorem 6 (Robson’s Theorem) Suppose that {S} and {T} are distinct strings of length {n}. Then there is an automaton with at most {\widetilde{O}(\sqrt {n})} states that accepts {S} and rejects {T}.

Proof: Since {S} and {T} are distinct we can assume that {S} starts with the prefix {u1} and {T} starts with the prefix {u0} for some string {u}. If the length of {u} is at most {\widetilde{O}(\sqrt {n})} the theorem is trivial: just construct an automaton that accepts {u1} and rejects {u0}.

So we can assume that {u = w\alpha} for some strings {w} and {\alpha} where the latter is order {\sqrt {n}} in length. By Lemma 3 we can assume that {\alpha1} is not periodic (otherwise swap the roles of {S} and {T} and use {\alpha0}). So by Lemma 4 we get that

\displaystyle  \#(S,\alpha1) = \widetilde{O}(\sqrt{n}).

Then by Lemma 5 we are done. \Box

Proof of Main Lemma

Proof: Let {S} have length {n} and let {\alpha} be a non-periodic string of length {l}. Also let {\#(S,\alpha) = m}. By Lemma 4 it follows that {m} is bounded by {{O}(|S|/|\alpha|)}.

Let {\alpha} occur at locations

\displaystyle  1 \le k_{1} < \cdots < k_{m} \le n.

Suppose that we are to construct a machine that finds the {r^{th}} copy of {\alpha}. By Lemma 2 there is a prime {p=\widetilde{O}(m)} so that

\displaystyle  k_{i} \equiv k_{r} \bmod p

if and only if {i=r}. Note we can also assume that {p > l}.

Let’s argue the special case where {k_{r}} is {0} modulo {p}. If it is congruent to another value the same argument can be used. This follows by having the machine initially skip a fixed amount of the input and then do the same as in the congruent to {0} case.

The automaton has states {(i,Y)} and {(i,N)} for {i=0,\dots,p}. The machine starts in state {(0,Y)} and tries to get to the accepting state {(l,Y)}. The transitions include:

\displaystyle  (0,Y) \underset{\alpha_{1}}{\rightarrow} (1,Y) \underset{\alpha_{2}}{\rightarrow} (2,Y) \underset{\alpha_{3}}{\rightarrow} \cdots \underset{\alpha_{l}}{\rightarrow} (l,Y).

This means that the machine keeps checking the input to see if it is scanning a copy of {\alpha}. If it gets all the way to the accepting state {(l,Y)}, then it stops.

Further transitions, which are taken on either input bit, are:

\displaystyle  (1,N) \rightarrow (2,N) \rightarrow \cdots \rightarrow (p,N),

and

\displaystyle  (0,Y) \underset{\neg \alpha_{1}}{\rightarrow} (1,N), (1,Y) \underset{\neg \alpha_{2}}{\rightarrow} (2,N), \dots, (l-1,Y) \underset{\neg \alpha_{l}}{\rightarrow} (l,N).

The second group means that if a wrong input happens, then {(i,Y)} moves to {(i+1,N)}. Finally, the state {(p,N)} resets and starts the search again by going back to the start state {(0,Y)}; to keep the machine deterministic, one can simply redirect the transitions into {(p,N)} to go to {(0,Y)} directly, avoiding any epsilon move.

Clearly this has the required number of states and it operates correctly. \Box
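The whole construction is compact enough to sketch in code. This is my own rendering under simplifying assumptions: the machine is handed the prime {p} and the 0-based start position {k_r} of the target occurrence up front (it only uses {k_r \bmod p}), and the initial offset is handled by a chain of skip states.

```python
# Sketch of the main-lemma automaton: find the r-th occurrence of a
# non-periodic alpha in S, given a prime p (> |alpha|) under which only
# the r-th occurrence's start position has the right residue.
# States: ("skip", j) for the initial offset, then ("Y", i) / ("N", i).

def find_rth_occurrence(s: str, alpha: str, p: int, k_r: int):
    """Return the 0-based position just after the target occurrence,
    by simulating the O(p + |alpha|)-state machine."""
    l = len(alpha)
    skip = k_r % p              # initial offset: chain of skip states
    state = ("skip", 0) if skip else ("Y", 0)
    for pos, bit in enumerate(s):
        kind, i = state
        if kind == "skip":
            state = ("skip", i + 1) if i + 1 < skip else ("Y", 0)
        elif kind == "Y" and bit == alpha[i]:
            if i + 1 == l:
                return pos + 1  # entered the special state (l, Y)
            state = ("Y", i + 1)
        else:                   # wrong bit, or already on the N track
            state = ("N", i + 1) if i + 1 < p else ("Y", 0)
    return None

# Example: "01" occurs in S at positions 0, 3, 6; the prime p = 5 gives
# them the distinct residues 0, 3, 1, so each one can be singled out.
S = "0110100110"
print(find_rth_occurrence(S, "01", 5, 3))
```

The machine never stores the position {k_r} itself, only its residue modulo {p}, which is exactly why {O(p + |\alpha|)} states suffice.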

Open Problems

The open problem is: Can the separating words problem be solved with a better bound? The lower bound is still order {\log n}. So the gap is exponential.