## How Dr. Seuss would prove the halting problem undecidable

Tim Finin, 11:12am 19 January 2008

I just discovered (via del.icio.us/popular) an extraordinary proof of the undecidability of the halting problem by linguist Geoffrey Pullum, now at the University of Edinburgh. What’s unusual about it is that it’s written as a poem in the style of Dr. Seuss.

Geoffrey K. Pullum, Scooping the loop snooper: An elementary proof of the undecidability of the halting problem. Mathematics Magazine 73.4 (October 2000), 319-320.

It’s a marvelous proof, sure to liven up any undergraduate theory of computation class. But I noticed errors in it — not logical errors, but transcriptional ones in the form of mangled words, perhaps introduced by an OCR system. The third line of the fifth stanza reads *“that would take and program and call P (of course!)”*, which has problems in syntax, semantics, rhythm and meter. I’d guess it should be *“that would take any program and call P (of course!)”*. Similarly, *“the”* in the third line of the third stanza should probably be *“they”*. Most of the online versions I found had these errors, but I eventually found what I take to be a correct version on the QED blog. I’ve not been able to get to the original version in Mathematics Magazine to verify the corrected version, which I include below.

**Scooping the Loop Snooper**

Geoffrey K. Pullum, University of Edinburgh

No program can say what another will do.

Now, I won’t just assert that, I’ll prove it to you:

I will prove that although you might work til you drop,

you can’t predict whether a program will stop.

Imagine we have a procedure called P

that will snoop in the source code of programs to see

there aren’t infinite loops that go round and around;

and P prints the word “Fine!” if no looping is found.

You feed in your code, and the input it needs,

and then P takes them both and it studies and reads

and computes whether things will all end as they should

(as opposed to going loopy the way that they could).

Well, the truth is that P cannot possibly be,

because if you wrote it and gave it to me,

I could use it to set up a logical bind

that would shatter your reason and scramble your mind.

Here’s the trick I would use – and it’s simple to do.

I’d define a procedure – we’ll name the thing Q –

that would take any program and call P (of course!)

to tell if it looped, by reading the source;

And if so, Q would simply print “Loop!” and then stop;

but if no, Q would go right back to the top,

and start off again, looping endlessly back,

til the universe dies and is frozen and black.

And this program called Q wouldn’t stay on the shelf;

I would run it, and (fiendishly) feed it *itself*.

What behaviour results when I do this with Q?

When it reads its own source, just what will it do?

If P warns of loops, Q will print “Loop!” and quit;

yet P is supposed to speak truly of it.

So if Q’s going to quit, then P should say, “Fine!” –

which will make Q go back to its very first line!

No matter what P would have done, Q will scoop it:

Q uses P’s output to make P look stupid.

If P gets things right then it lies in its tooth;

and if it speaks falsely, it’s telling the truth!

I’ve created a paradox, neat as can be –

and simply by using your putative P.

When you assumed P you stepped into a snare;

Your assumptions have led you right into my lair.

So, how to escape from this logical mess?

I don’t have to tell you; I’m sure you can guess.

By *reductio*, there cannot possibly be

a procedure that acts like the mythical P.

You can never discover mechanical means

for predicting the acts of computing machines.

It’s something that cannot be done. So we users

must find our own bugs; our computers are losers!
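
For readers who prefer code to verse, the poem’s construction can be sketched in a few lines of Python. This is only an illustration under an assumption the proof itself refutes: since no correct loop-snooper P can exist, we hand Q a deliberately naive candidate (`p_candidate`, a name of my own, not Pullum’s) and watch Q contradict its verdict.

```python
def make_q(p):
    """Build the poem's Q from a candidate snooper p(program, input),
    where p returns True if it thinks the program halts on that input."""
    def q(program):
        if p(program, program):   # P says: program halts when fed itself...
            while True:           # ...so Q loops endlessly back,
                pass
        else:                     # P says: program loops on itself...
            return "Loop!"        # ...so Q simply prints "Loop!" and stops.
    return q

# A naive candidate P that claims every program loops on every input.
def p_candidate(program, data):
    return False

q = make_q(p_candidate)

# P predicted that q fed itself loops forever -- yet it halts at once:
print(q(q))  # prints "Loop!"

# Had p_candidate answered True instead, q(q) would genuinely loop forever,
# again refuting the prediction. Either way, Q uses P's output to scoop it.
```

Whatever candidate P you supply, Q inverts its verdict about Q’s own behaviour, which is exactly the *reductio* in the final stanzas.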