God’s number is 20

After thirty years of people thinking about it, a famous open problem about the Rubik’s Cube has finally been resolved: God’s number is 20. In other words, every Rubik’s Cube, no matter how mixed up, can be solved in 20 moves or fewer. A nice summary of the work can be found at: http://www.cube20.org/

This enormous calculation was completed by a team of Morley Davidson, John Dethridge, Herbert Kociemba, and Tomas Rokicki, and oh yes, 35 CPU-years of idle computer time donated by Google. The preprint should be available by early September sometime.

It should be noted that for quite a while “God’s algorithm” has been known — in other words, computers could quickly solve any given position in a minimal number of moves. But even with the algorithm in hand, getting God’s number by brute force would take too long, since there are 4.3 \times 10^{19} possible positions and one would have to check that every one of them can be solved in 20 moves or fewer. Getting around checking this humongous number of positions required a lot of ingenuity, not just computing muscle, and the team should be warmly congratulated.

For the rest of this note, I’d like to briefly discuss a tiny bit of the mathematics involved, and an open problem or two.

First of all, we should be clear about what we mean by “20 moves” — this is in the half-turn metric (HTM), where a move consists of turning any side, by either a quarter turn or a half turn. So to mathematicians, “God’s number” is the diameter of the Cayley graph of the Rubik’s Cube group with generating set \{L, R, U, D, F, B, L^2, R^2, U^2, D^2, F^2, B^2\}, together with the inverse quarter turns. This is the metric of choice for cubers, and probably has been for thirty years. To cubers, turning a side is turning a side.

A half turn is of course just two quarter turns, and this leads us to consider the quarter-turn metric (QTM), where we only allow turning any side a quarter turn. You can still give a side a half turn, but now this counts as two moves. What is the diameter of the Cayley graph for the generating set \{L, R, U, D, F, B\} (and their inverses)? This is probably a more natural generating set to mathematicians, who might resist including both x and x^2 in a list of generators for a group.
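For a sense of scale, a standard counting argument (my addition here, not part of the team's proof) already gives a lower bound in either metric: with b moves available at each step, at most b^d positions are reachable in d moves, so b^d must reach the total number of positions. A minimal sketch:

```python
# Exact number of legal Rubik's Cube positions, about 4.3 * 10**19.
POSITIONS = 43_252_003_274_489_856_000

def counting_lower_bound(branching):
    """Smallest d with branching**d >= POSITIONS.

    This ignores collisions (different move sequences reaching the
    same position), so it is only a lower bound on the diameter."""
    d, reachable = 0, 1
    while reachable < POSITIONS:
        reachable *= branching
        d += 1
    return d

# HTM: 6 faces, each with 3 turns (two quarter turns and a half turn).
print(counting_lower_bound(18))   # 16: some position needs at least 16 moves
# QTM: 6 faces, each with 2 quarter turns.
print(counting_lower_bound(12))   # 19: some position needs at least 19 moves
```

So counting alone gives 16 for HTM and 19 for QTM, against the true answer of 20 and the conjectured 26; closing that gap is exactly what took the ingenuity.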

It is now known (due to a calculation by Rokicki) that 29 moves are always sufficient, and he suspects that the methods used to show that 20 suffice for HTM could probably easily bring this down to 28.

On the other hand, there is essentially only one position known that requires 26 moves in the QTM, and only two known that require 25 moves. This is a very different situation from HTM, where there are roughly 300 million positions at distance 20 from solved. Extensive searches by Rokicki have failed to find any more positions requiring 25 or 26 moves. So the conjecture is that the diameter of the Cayley graph with the QTM generators is 26, but it might be a while before we know that for sure. In particular, the methods used to solve HTM apparently don’t work quite as cleanly for QTM.

So this is one problem: show that God’s number for the quarter-turn metric is 26.

What might be nice to see, for either metric, is a description of the geometry or topology of the Rubik’s cube. One could imagine, for example, adding higher dimensional cells to extend the Cayley graph to a polyhedral complex, and studying the topology of this complex.

Finally, the following also seems to be wide open: if one makes random moves on the cube (i.e. takes a random walk on one of the Cayley graphs), what can be said about the Markov chain mixing time? A famous result of Bayer and Diaconis is that it takes “seven shuffles” to mix up a deck of 52 cards, and it would be mathematically nice to have such an answer for the Cube. (The diameter of the Cayley graph being 20 puts an upper bound on the mixing time, but the bound obtained in this naive way is much too large to be close to the truth.)
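To see the kind of quantity in play, here is a toy sketch, my own illustration rather than anything about the Cube: the lazy random-transposition walk on a 3-card deck (a miniature cousin of the Bayer–Diaconis setup), with its total variation distance to the uniform distribution tracked exactly.

```python
from itertools import permutations

# States: the 6 orderings of a 3-card deck.
states = list(permutations(range(3)))
index = {s: i for i, s in enumerate(states)}
moves = [(0, 1), (0, 2), (1, 2)]   # the three transpositions

def swap(s, i, j):
    t = list(s)
    t[i], t[j] = t[j], t[i]
    return tuple(t)

def step(dist):
    """One step of the lazy walk: stay with probability 1/2,
    else apply a uniformly random transposition."""
    new = [0.0] * 6
    for i, s in enumerate(states):
        new[i] += dist[i] / 2
        for a, b in moves:
            new[index[swap(s, a, b)]] += dist[i] / 6
    return new

def tv_to_uniform(dist):
    return sum(abs(q - 1 / 6) for q in dist) / 2

dist = [1.0] + [0.0] * 5   # start at the sorted deck
for t in range(6):
    print(t, round(tv_to_uniform(dist), 4))
    dist = step(dist)
```

The distance falls geometrically, by a factor of 2 per step here, governed by the spectral gap of the lazy walk; for the Cube’s Cayley graphs no comparably sharp answer is known.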

But unlike card shuffling, which actually has some practical application, at least for (say) casinos who want to make sure their decks are randomized, this would be purely for knowledge and enjoyment. Apparently this kind of mixing time question was once an issue for speed-solving tournaments: they would make a sequence of random moves to mix up the cube, and one might want to know how many random moves suffice. But this is no longer an issue — instead of choosing a sequence of random moves, they now have the computer choose a position uniformly at random from all 4.3 \times 10^{19} positions, and then use God’s algorithm to put the cube into that position.

(Special thanks to Tom Rokicki for very helpful conversations in writing this note.)

Coloring the integers

Here is a problem I composed, which recently appeared on the Colorado Mathematical Olympiad.

If one wishes to color the integers so that every two integers that differ by a factorial get different colors, what is the fewest number of colors necessary?

I might describe a solution, as well as some related history, in a future post. But for now I’ll just say that Adam Hesterberg solved this problem at Canada/USA Mathcamp a few summers ago, claiming the $20 prize I offered almost as soon as I offered it. At the time, I suspected what the exact answer was, but still did not know it for certain.

Although the wording of the problem strongly suggests that the answer is finite, I don’t think that this is entirely obvious.
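Brute force gives at least lower bounds: the sketch below, my own and not a solution, computes the exact chromatic number of the graph on \{0, 1, \ldots, N\} in which i and j are adjacent when |i - j| is a factorial, by backtracking over colorings.

```python
def factorials_up_to(N):
    """Factorials 1, 2, 6, 24, ... that are at most N."""
    out, f, i = [], 1, 2
    while f <= N:
        out.append(f)
        f *= i
        i += 1
    return out

def colorable(N, diffs, k):
    """Can 0..N be k-colored with no two vertices at a difference
    in `diffs` sharing a color?  Simple backtracking search."""
    colors = [-1] * (N + 1)

    def extend(v):
        if v > N:
            return True
        used = {colors[v - d] for d in diffs if d <= v}
        for c in range(k):
            if c not in used:
                colors[v] = c
                if extend(v + 1):
                    return True
        colors[v] = -1
        return False

    return extend(0)

def chromatic_number(N, diffs):
    k = 1
    while not colorable(N, diffs, k):
        k += 1
    return k

for N in (2, 10, 24):
    diffs = factorials_up_to(N)
    print(N, diffs, chromatic_number(N, diffs))
```

Finite segments only bound the answer from below, of course, and the search is exponential in the worst case, so this by itself says nothing about whether finitely many colors suffice for all of \mathbb{Z}.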

Along those lines, here is another infinite graph with finite chromatic number.

If one wishes to color the points of the Euclidean plane so that every two points at distance one get different colors, what is the fewest number of colors necessary?

This is one of my favorite unsolved math problems, just for being geometrically appealing and apparently intractably hard. After fifty years of many people thinking about it, all that is known is that the answer is 4, 5, 6, or 7. Recent work of Shelah and Soifer suggests that the exact answer may depend on the choice of set theoretic axioms.

This inspired the following related question.

If one wishes to color the points of the Euclidean plane so that every two points at factorial distance get different colors, do finitely many colors suffice?

More generally, if S is a sequence of positive real numbers that grows quickly enough (say exponentially), and one forbids pairs of points at distance s from receiving the same color for every s \in S, one would suspect that finitely many colors suffice. On the other hand, if S grows slowly enough (say linearly), one might expect that infinitely many colors are required.

RIP Martin Gardner

Martin Gardner passed away yesterday, at 95; here is the NYTimes obituary.

I grew up with his puzzles and mathematical recreations, as I’m sure many mathematicians of my generation (and generations before) did. I am reminiscing this evening about interesting topics that Gardner introduced me to, and the following geometric gem comes to mind. The problem was apparently first proposed by Henri Lebesgue in 1914 and is still open, so it is almost 100 years old now.

Question: Find a universal cover C of least area in the plane, meaning a set containing a congruent copy of every planar set of unit diameter.

If one restricts to convex C, then such a minimum is guaranteed to exist by the Blaschke selection theorem, a compactness result for sequences of bounded convex sets. A regular hexagon of unit width does the trick, but surprisingly, one can take off tiny pieces and still have a universal cover.
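For a feel for the numbers: a regular hexagon of unit width (distance 1 between opposite sides) has area \sqrt{3}/2 \approx 0.866, so the pieces being removed really are tiny. A quick check of the arithmetic:

```python
import math

# Regular hexagon of width 1: the apothem (inradius) is 1/2, and each
# side subtends an angle of pi/3 at the center.
apothem = 0.5
side = 2 * apothem * math.tan(math.pi / 6)   # = 1/sqrt(3)
area = 6 * 0.5 * side * apothem              # six triangles, (1/2)*base*height

print(side, area)   # area = sqrt(3)/2, about 0.8660
```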

No one has ever exhibited such a minimum with proof, although there have been a few false claims of optimal solutions. I just searched and found an amusing 1992 Geometriae Dedicata article by H. C. Hansen in which the author shaves 4 \times 10^{-11} off the then reigning world record. I am not sure if anyone has improved on this result since, but this might be a fun thing to think about; indeed, Hansen suggests in the article how computers might help solve the problem, and computers have come a long way since 1992.

Gardner’s name came up in conversation just the other day. Tom Rokicki came to talk to my Rubik’s Cube class at Stanford about his work on “God’s algorithm” — Tom set a world record recently by proving that every Rubik’s Cube, no matter how scrambled, can be solved in 22 or fewer face turns. I asked him how long it would be until we have a proof of the conjectured answer of 20. He said he thought he might have 21 in the next few months, and then said that he wanted to have it down to 20 moves by Gathering for Gardner 2012. I hope that the Gatherings for Gardner continue, and it would be nice to see Tom succeed in his goal too.

I am not sure if anyone in history has ever made more people smile and shake their heads about mathematical things than Martin Gardner. His writings illuminated the magical and mysterious qualities of mathematics. Gardner was also famous as a skeptic and a debunker, like many magicians before him. The NYTimes obituary reports:

He ultimately found no reason to believe in anything religious except a human desire to avoid “deep-seated despair.” So, he said, he believed in God.

Well, godspeed to you, Martin, and thanks for all the wonderful writings and beautiful thoughts.

Francisco Santos disproves Hirsch conjecture

The Hirsch conjecture has been around for over 50 years. The conjecture states that a d-dimensional polytope with n facets has diameter at most n-d. (The facets of a polytope are its maximal-dimensional faces, i.e. the (d-1)-dimensional faces, and saying that the diameter is at most n-d is just saying that every pair of vertices can be connected by a path of at most n-d edges in the graph of the polytope.)
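To make the statement concrete, here is a small sketch, my own illustration: the d-dimensional cube has n = 2d facets, and its graph has diameter exactly d = n - d, so the cube meets the Hirsch bound with equality.

```python
from collections import deque
from itertools import product

def cube_graph_diameter(d):
    """Diameter of the graph of vertices and edges of the d-cube."""
    verts = list(product((0, 1), repeat=d))

    def neighbors(v):
        # Edges of the cube join vertices differing in one coordinate.
        for i in range(d):
            yield v[:i] + (1 - v[i],) + v[i + 1:]

    diam = 0
    for s in verts:
        # Breadth-first search from s.
        dist = {s: 0}
        queue = deque([s])
        while queue:
            u = queue.popleft()
            for w in neighbors(u):
                if w not in dist:
                    dist[w] = dist[u] + 1
                    queue.append(w)
        diam = max(diam, max(dist.values()))
    return diam

for d in (2, 3, 4):
    n = 2 * d   # the d-cube has n = 2d facets
    print(d, cube_graph_diameter(d), n - d)   # diameter equals n - d
```

Getting between antipodal vertices requires changing all d coordinates, one edge at a time, which is where the equality comes from.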

The known upper bounds on the diameter are very far from n-d; in fact no bound that is even polynomial (much less linear) in n and d is known. It was considered substantial progress when Gil Kalai obtained the first subexponential upper bounds. This led many to speculate that the conjecture is false, and apparently Victor Klee encouraged many people to try disproving it over the years.

Francisco Santos has just announced an explicit counterexample to the Hirsch conjecture, in dimension d = 43 with n = 86 facets. Details are to be described at the Grünbaum–Klee meeting in Seattle this summer.

To researchers on polytopes, this result may not be too surprising. Nevertheless, it is an old problem and it is nice to see it finally resolved. And there is something pleasing about finding explicit counterexamples in higher dimensional spaces. It offers a concrete way to see how bad our intuition for higher dimensions really is. Kahn and Kalai’s spectacular counterexample to “Borsuk’s conjecture” (in dimension d = 1325) comes to mind. I look forward to seeing the details of Santos’s construction.

Many Markov components

In my recent preprint, I showed that once r \ge c / n the Metropolis Markov chain on configurations of hard discs is no longer ergodic. In particular, even for reasonably small radius there are stable configurations of discs. In this note the goal is to make a more quantitative statement about the number of Markov components, in particular when r > c_2 / \sqrt{n} (the usual range of interest in statistical mechanics).

For the main construction, start with the following configuration of 24 discs in a square. (It is not relevant to the following discussion, but it is fun to point out that this is the densest packing of 24 equal sized discs in a square.)

This configuration of 24 discs is easily seen to be stable (meaning that each disc is individually held in place by its neighboring discs and the boundary of the square), and in fact it is still stable if we delete a carefully chosen set of four discs as follows.

By stacking k^2 of these together, one obtains stable configurations with 20 k^2 discs (each block contributes 24 - 4 = 20 discs).

Finally one can add discs in any of the “holes” in these kinds of configurations, leading to more and more possibilities for stable configurations. For example, here is one of the {36 \choose 18} \approx 9.08 \times 10^9 possibilities for dropping discs in 18 of the 36 holes in the previous illustration.

By continuing constructions like this for larger and larger k we observe two facts.

(1) As n \to \infty the number of components can grow quite fast. To make this quantitative we expand on the example. Suppose there are 20 k^2 discs to start out; then there are 4 k^2 holes (four per block). Suppose we drop 2 k^2 discs into those holes. Then the total number of discs is n = 22 k^2, and the number of Markov components is at least {4 k^2 \choose 2 k^2} = {2n/11 \choose n/11}. (This is for unlabeled discs. For labeled discs, multiply this by n!.) By Stirling’s formula, this number grows exponentially fast in n (something like 1.134^n).
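The numbers above are easy to check directly; the sketch below confirms {36 \choose 18} \approx 9.08 \times 10^9 and that the per-disc growth rate {2n/11 \choose n/11}^{1/n} approaches 4^{1/11} \approx 1.134.

```python
import math

# The k = 3 example above: 36 holes, choose 18 of them to fill.
print(math.comb(36, 18))   # 9075135300, i.e. about 9.08e9

# With n = 22 k^2 discs there are at least C(4k^2, 2k^2) components,
# and C(2m, m) ~ 4^m / sqrt(pi m), so the growth rate per disc
# tends to 4**(1/11).
for k in (5, 10, 20, 40):
    n = 22 * k * k
    # Work with the logarithm; the binomial itself is astronomically large.
    rate = math.exp(math.log(math.comb(4 * k * k, 2 * k * k)) / n)
    print(k, round(rate, 5))

print(4 ** (1 / 11))   # about 1.1343
```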

In this particular example the discs have total area \pi r^2 n \approx 0.67, but that could obviously be adjusted by changing the ratio of extra discs to holes.

(2) For every \lambda \in (0.60, 0.75) there is an \alpha = \alpha(\lambda) > 1 and a sequence of configurations C_1, C_2, \ldots with C_i \in Config(i, r(i)), such that the number of Markov components is greater than \alpha^i for sufficiently large i, and \pi r^2 n \to \lambda as i \to \infty.

It turns out that the configurations described here are separate components only in the sense that a Markov chain moving one disc at a time cannot mix from one state to another. (The four discs in the center of a square can rotate about the center of the square!) But a small modification of this construction would seem to give exponentially many components in the sense of \pi_0, each with small positive measure.

Is exponentially many components the most possible? The Milnor–Thom theorem gives an upper bound much larger than this, something like O(e^{c n^2}) for some c > 0. It would also be very interesting to know any bounds on the topological complexity of these path components.

A conjecture concerning random cubical complexes

Nati Linial and Roy Meshulam defined a certain kind of random two-dimensional simplicial complex, and found the threshold for vanishing of homology. Their theorem is in some sense a perfect homological analogue of the classical Erdős–Rényi characterization of the threshold for connectivity of the random graph.

Linial and Meshulam’s definition was as follows. Y(n,p) has the complete graph on n vertices as its 1-skeleton, with each of the {n \choose 3} triangular faces inserted independently with probability p, which may depend on n. We say that Y(n,p) asymptotically almost surely (a.a.s.) has property \mathcal{P} if the probability that Y(n,p) \in \mathcal{P} tends to one as n \to \infty.

Linial and Meshulam showed that if \omega is any function that tends to infinity with n and if p = (\log{n} + \omega) / n then a.a.s. H_1(Y(n,p), \mathbb{Z}/2) = 0, and if p = (\log{n} - \omega) / n then a.a.s. H_1(Y(n,p), \mathbb{Z}/2) \neq 0.

(This result was later extended to arbitrary finite field coefficients and arbitrary dimension by Meshulam and Wallach. It may also be worth noting for the topologically inclined reader that their argument is actually a cohomological one, but in this setting universal coefficients gives us that homology and cohomology are isomorphic vector spaces.)

Eric Babson, Chris Hoffman, and I found the threshold for vanishing of the fundamental group \pi_1(Y(n,p)) to be quite different. In particular, we showed that if \epsilon > 0 is any constant and p \le n^{-1/2 - \epsilon} then a.a.s. \pi_1(Y(n,p)) \neq 0, and if p \ge n^{-1/2 + \epsilon} then a.a.s. \pi_1(Y(n,p)) = 0. The harder direction is showing that on the left side of the threshold the fundamental group is nontrivial, and this uses Gromov’s ideas of negative curvature. In particular, to show that \pi_1 is nontrivial we must first show that it is a hyperbolic group.

[I want to advertise one of my favorite open problems in this area: as far as I know, nothing is known about the threshold for H_1( Y(n,p) , \mathbb{Z}) , other than what is implied by the above results.]

I was thinking recently about a cubical analogue of the Linial–Meshulam setup. Define Z(n,p) to be the one-skeleton of the n-dimensional cube with each square two-dimensional face inserted independently with probability p. This seems to be the natural cubical analogue of the Linial–Meshulam model. So what are the thresholds for the vanishing of H_1(Z(n,p), \mathbb{Z}/2) and \pi_1(Z(n,p))?
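For small n one can compute directly. The following sketch, my own experiment with arbitrary illustrative parameters, builds Z(n,p), writes each square's boundary as a vector over \mathbb{Z}/2, and uses \dim H_1 = (E - V + 1) - \operatorname{rank} \partial_2, valid because the one-skeleton of the cube is connected.

```python
import random
from itertools import combinations

def gf2_rank(rows):
    """Rank over GF(2) of a list of rows encoded as integer bitmasks."""
    pivots = {}   # leading-bit position -> reduced row
    rank = 0
    for row in rows:
        while row:
            lead = row.bit_length() - 1
            if lead in pivots:
                row ^= pivots[lead]
            else:
                pivots[lead] = row
                rank += 1
                break
    return rank

def cube_b1(n, p, seed=0):
    """dim H_1(Z(n,p); Z/2): the 1-skeleton of the n-cube, plus each
    square 2-face included independently with probability p."""
    rng = random.Random(seed)

    # Edges: pairs of 0/1 vectors (encoded as bitmasks) differing in one bit.
    edges = {}
    for v in range(2 ** n):
        for i in range(n):
            w = v ^ (1 << i)
            if v < w:
                edges[(v, w)] = len(edges)

    def edge_id(a, b):
        return edges[(min(a, b), max(a, b))]

    # Squares: base vertex v with bits i and j clear, varying those bits.
    rows = []
    for v in range(2 ** n):
        for i, j in combinations(range(n), 2):
            if v & (1 << i) or v & (1 << j) or rng.random() >= p:
                continue
            a, b, c, d = v, v ^ (1 << i), v ^ (1 << i) ^ (1 << j), v ^ (1 << j)
            boundary = 0
            for x, y in ((a, b), (b, c), (c, d), (d, a)):
                boundary ^= 1 << edge_id(x, y)
            rows.append(boundary)

    E, V = len(edges), 2 ** n
    return (E - V + 1) - gf2_rank(rows)   # cycles minus independent squares

print(cube_b1(3, 0.0))   # 5: no squares, so b1 = E - V + 1 = 12 - 8 + 1
print(cube_b1(3, 1.0))   # 0: the full 2-skeleton of the 3-cube is simply connected
print(cube_b1(6, 0.5))   # a random sample at p = 1/2
```

The matrices grow quickly with n, but small n is already enough to watch b_1 drop as p increases.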

I just did some “back of the envelope” calculations which surprised me. It looks like p must be much larger (in particular, bounded away from zero) before either homology or homotopy is killed. Here is what I think probably happens. For the sake of simplicity assume here that p is constant, although in reality there are o(1) terms that I am suppressing.

(1) If p < \log{2} then a.a.s. H_1(Z(n,p), \mathbb{Z}/2) \neq 0, and if p > \log{2} then a.a.s. H_1(Z(n,p), \mathbb{Z}/2) = 0.

(2) If p < (\log{2})^{1/4} then a.a.s. \pi_1(Z(n,p)) \neq 0, and if p > (\log{2})^{1/4} then a.a.s. \pi_1(Z(n,p)) = 0.

Perhaps in a future post I can explain where the numbers \log{2} \approx 0.69315 and (\log{2})^{1/4} \approx 0.91244 come from. Or in the meantime, I would be grateful for any corroborating computations or counterexamples.