A ninth planet?

John Matese and Daniel Whitmire, from the University of Louisiana at Lafayette, are claiming that data from NASA’s Wide-field Infrared Survey Explorer already suggests that there is a large planet in the outer solar system. This hypothetical planet, which they have nicknamed Tyche, orbits the sun at 15,000 AU and weighs in at four times the mass of Jupiter. (Apparently Matese suggested this theory as early as 1999, based on a perceived statistical anomaly in the orbits of comets.)

When I read this I wondered at first whether it was even conceivable, and in particular whether 15,000 AU would even still count as part of our solar system. I looked it up, and it is thought that the sun’s gravitational field dominates that of other stars out to about two light-years, or 125,000 AU. The Oort cloud, a hypothetical cloud of a trillion comets, which Freeman Dyson has speculated to be a possible long-term home for our distant descendants, is thought to lie between 50,000 and 100,000 AU from the sun.

It seems that the Tyche hypothesis is not widely accepted in the astronomy community, and NASA has demurred, suggesting that we will know more in coming months or years. I, for one, welcome our new giant planet overlord.


Thanks to Dr. Heiser for the link.

Packing tetrahedra

Last spring I saw a great colloquium talk on packing regular tetrahedra in space by Jeffrey Lagarias. He pointed out that in some sense the problem goes back to Aristotle, who apparently claimed that they tile space. Since Aristotle was thought to be infallible, this was repeated throughout the ages until someone (maybe Minkowski?) noticed that they actually don’t.

John Conway and Sal Torquato considered various quantitative questions about packing, tiling, and covering, and in particular asked about the densest packing of tetrahedra in space. They optimized over a very special kind of periodic packing, and in the densest packing they found, the tetrahedra take up about 72% of space.

Compare this to the densest packing of spheres in space, which fills about 74% of space. If Conway and Torquato’s example were actually the densest packing of tetrahedra, it would be a counterexample to Ulam’s conjecture that, among convex bodies, the sphere is the worst case for packing.

But a series of papers improving the lower bound followed, and as of early 2010 the record is held by Chen, Engel, and Glotzer, with a packing fraction of 85.63%.

I want to advertise two attractive open problems related to this.

(1) Good upper bounds on tetrahedron packing.

At the time of the colloquium talk I saw several months ago, it seemed that despite a whole host of papers improving the lower bound on tetrahedron packing, there was no nontrivial upper bound in the literature. Since then Gravel, Elser, and Kallus posted a paper on the arXiv which gives an upper bound. This is very cool, but the upper bound on density they give is something like 1 - 2.6 \times 10^{-25}, so there is still a lot of room for improvement.

(2) Packing tetrahedra in a sphere.

As far as I know, even the following problem is open. Let’s make our lives easier by discretizing the problem: simply ask how many regular tetrahedra we can pack in a sphere. Okay, let’s make it even easier: the edge length of each tetrahedron is the same as the radius of the sphere. Even easier: each tetrahedron has to have one corner at the center of the sphere. Now how many tetrahedra can you pack in the sphere?

It is fairly clear that you can get 20 tetrahedra in the sphere, since the edge length of the icosahedron is just slightly longer than the radius of its circumscribed sphere. By comparing the volume of the regular tetrahedron to the volume of the sphere, we get a trivial upper bound of 35 tetrahedra. But by comparing surface area instead, we get an upper bound of 22 tetrahedra.
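Both upper bounds are one-line computations; here is a quick check in Python. The only outside facts used are the volume of a regular tetrahedron of edge r, and the standard fact that the solid angle at a vertex of a regular tetrahedron is \arccos(23/27) steradians (the surface-area comparison is really a solid-angle comparison).

```python
import math

r = 1.0  # sphere radius = tetrahedron edge length

# Volume bound: a regular tetrahedron with edge r has volume r^3 / (6 sqrt 2).
tet_volume = r**3 / (6 * math.sqrt(2))
sphere_volume = (4 / 3) * math.pi * r**3
print(math.floor(sphere_volume / tet_volume))        # 35

# Solid-angle bound: a tetrahedron cornered at the center subtends
# arccos(23/27) steradians of the full 4*pi.
vertex_solid_angle = math.acos(23 / 27)
print(math.floor(4 * math.pi / vertex_solid_angle))  # 22
```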

There is apparently a folklore conjecture that 20 tetrahedra is the right answer, so proving this comes down to ruling out 21 or 22. To rule out 21 seems like a nonlinear optimization problem in some 63-dimensional space: each tetrahedron is pinned at the center, so its position is determined by a rotation, and 21 tetrahedra have 3 \times 21 = 63 degrees of freedom.

I’d guess that this is within the realm of computation if someone made some clever reductions. Oleg Musin settled the question of the kissing number in 4-dimensional space in 2003. To rule out a kissing number of 25 is essentially optimizing some function over a 75-dimensional space. This sounds a little bit daunting, but it is apparently much easier than Thomas Hales’s proof of the Kepler conjecture. (For a nice survey of this work, see this article by Pfender and Ziegler.)

The fundamental group of random 2-complexes

Eric Babson, Chris Hoffman, and I recently posted major revisions of our preprint, “The fundamental group of random 2-complexes,” to the arXiv. This article will appear in the Journal of the American Mathematical Society. This note is intended to be a high-level summary of the main result, with a few words about the techniques.

The Erdős–Rényi random graph G(n,p) is the probability space on all graphs with vertex set [n] = \{ 1, 2, \dots, n \}, with each edge included independently with probability p. Frequently p = p(n) and n \to \infty, and we say that G(n,p) asymptotically almost surely (a.a.s.) has property \mathcal{P} if \mbox{Pr} [ G(n,p) \in \mathcal{P} ] \to 1 as n \to \infty.

A seminal result of Erdős and Rényi is that p(n) = \log{n} / n is a sharp threshold for connectivity. In particular if p > (1+ \epsilon) \log{n} / n, then G(n,p) is a.a.s. connected, and if p < (1- \epsilon) \log{n} / n, then G(n,p) is a.a.s. disconnected.
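The threshold is easy to observe numerically. Here is a minimal simulation sketch, assuming networkx is available; the constants 0.8 and 1.2 are just illustrative choices on either side of the threshold.

```python
import math
import networkx as nx

n = 2000
trials = 20
for c in (0.8, 1.2):  # on either side of the threshold log(n) / n
    p = c * math.log(n) / n
    connected = sum(nx.is_connected(nx.fast_gnp_random_graph(n, p))
                    for _ in range(trials))
    print(f"c = {c}: {connected}/{trials} samples connected")
```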

Nathan Linial and Roy Meshulam introduced a two-dimensional analogue of G(n,p), and proved an analogue of the Erdős-Rényi theorem. Their two-dimensional analogue is as follows: let Y(n,p) denote the probability space of all 2-dimensional (abstract) simplicial complexes with vertex set [n] and edge set {[n] \choose 2} (i.e. a complete graph for the 1-skeleton), with each of the { n \choose 3} triangles included independently with probability p.

Linial and Meshulam showed that p(n) = 2 \log{n} / n is a sharp threshold for vanishing of first homology H_1(Y(n,p)). (Here the coefficients are over \mathbb{Z} / 2. This was generalized to \mathbb{Z} /p for all p by Meshulam and Wallach.) In other words, once p is much larger than 2 \log{n} / n, every (one-dimensional) cycle is the boundary of some two-dimensional subcomplex.
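Everything here is linear algebra over \mathbb{Z}/2, so it is easy to experiment with directly: \dim H_1 = \dim \ker \partial_1 - \mathrm{rank} \, \partial_2, and with a complete 1-skeleton, \dim \ker \partial_1 = {n \choose 2} - (n-1). Here is a sketch in plain Python; keep n modest, and bear in mind that finite-size effects are substantial this far from the n \to \infty regime.

```python
import itertools, math, random

def h1_dim_mod2(n, p, rng=random.Random(0)):
    """dim H_1(Y(n,p); Z/2) = dim ker d_1 - rank d_2, computed over GF(2)."""
    edges = list(itertools.combinations(range(n), 2))
    index = {e: i for i, e in enumerate(edges)}
    # Each included triangle is a row of d_2: a bitmask over its three edges.
    rows = []
    for t in itertools.combinations(range(n), 3):
        if rng.random() < p:
            mask = 0
            for e in itertools.combinations(t, 2):
                mask |= 1 << index[e]
            rows.append(mask)
    # Gaussian elimination over GF(2) gives the rank of d_2.
    rank = 0
    for b in range(len(edges)):
        pivot = None
        for i, row in enumerate(rows):
            if (row >> b) & 1:
                pivot = rows.pop(i)
                break
        if pivot is not None:
            rows = [row ^ pivot if (row >> b) & 1 else row for row in rows]
            rank += 1
    dim_ker_d1 = len(edges) - (n - 1)  # the complete 1-skeleton is connected
    return dim_ker_d1 - rank

n = 25
for c in (0.5, 1.5):  # p = c * (2 log n / n), on either side of the threshold
    print(c, h1_dim_mod2(n, c * 2 * math.log(n) / n))
```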

Babson, Hoffman, and I showed that the threshold for vanishing of \pi_1 (Y(n,p)) is much larger: up to some log terms, the threshold is p = n^{-1/2}. In other words, you must add a lot more random two-dimensional faces before every cycle is the boundary of not just any subcomplex, but of the continuous image of a topological disk. A precise statement is as follows.

Main result. Let \epsilon > 0 be arbitrary but constant. If p \le n^{-1/2 - \epsilon} then \pi_1 (Y(n,p)) \neq 0, and if p \ge n^{-1/2 + \epsilon} then \pi_1 (Y(n,p)) = 0, asymptotically almost surely.

It is relatively straightforward to show that when p is much larger than n^{-1/2}, a.a.s. \pi_1 =0. Almost all of the work in the paper is showing that when p is much smaller than n^{-1/2} a.a.s. \pi_1 \neq 0. Our methods depend heavily on geometric group theory, and on the way to showing that \pi_1 is non-vanishing, we must show first that it is hyperbolic in the sense of Gromov.

Proving this involves some intermediate results which do not involve randomness at all, and which may be of independent interest in topological combinatorics. In particular, we must characterize the topology of sufficiently sparse two-dimensional simplicial complexes. The precise statement is as follows:

Theorem. If \Delta is a finite simplicial complex such that f_2 (\sigma) / f_0(\sigma) \le 1/2 for every subcomplex \sigma, then \Delta is homotopy equivalent to a wedge of circles, spheres, and projective planes.

(Here f_i denotes the number of i-dimensional faces.)

Corollary. With hypothesis as above, the fundamental group \pi_1( \Delta) is isomorphic to a free product \mathbb{Z} * \mathbb{Z} * \dots * \mathbb{Z} / 2 * \mathbb{Z}/2, for some number of \mathbb{Z}’s and \mathbb{Z}/2’s.

It is relatively easy to check that if p = O(n^{-1/2 - \epsilon}) then with high probability subcomplexes of Y(n,p) on a bounded number of vertices satisfy the hypothesis of this theorem. (Of course Y(n,p) itself does not, since it has f_0 = n and roughly f_2 \approx n^{5/2} as p approaches n^{-1/2}.)

But the corollary gives us that the fundamental group of small subcomplexes is hyperbolic, and then Gromov’s local-to-global principle allows us to patch these together to get that \pi_1 ( Y(n,p) ) is hyperbolic as well.
This gives a linear isoperimetric inequality on \pi_1, which we can “lift” to a linear isoperimetric inequality on Y(n,p).

But if Y(n,p) is simply connected and satisfies a linear isoperimetric inequality, then every 3-cycle would be contractible using a bounded number of triangles, and this is easy to rule out with a first-moment argument.

There are a number of technical details that I am omitting here, but hopefully this at least gives the flavor of the argument.

An attractive open problem in this area is to identify the threshold t(n) for vanishing of H_1( Y(n,p), \mathbb{Z}). It is tempting to think that t(n) \approx 2 \log{n} / n, since this is the threshold for vanishing of H_1(Y(n,p), \mathbb{Z} / m) for every integer m. This would follow for any fixed simplicial complex, but the argument doesn’t apply in the limit: Meshulam and Wallach’s result holds for fixed m as n \to \infty, so in particular it does not rule out torsion in integer homology that grows with n.

As far as we know at the moment, no one has written down any improvements to the trivial bounds on t(n), that 2 \log{n} / n \le t(n) \le n^{-1/2}. Any progress on this problem will require new tools to handle torsion in random homology, and will no doubt be of interest in both geometric group theory and stochastic topology.

God’s number is 20

After thirty years of people thinking about it, a famous open problem about the Rubik’s Cube has finally been resolved: God’s number is 20. In other words, every Rubik’s Cube, no matter how mixed up, can be solved in 20 moves or less. A nice summary of the work can be found at: http://www.cube20.org/

This enormous calculation was completed by a team of Morley Davidson, John Dethridge, Herbert Kociemba, and Tomas Rokicki, and oh yes, 35 CPU-years of idle computer time donated by Google. The preprint should be available by early September sometime.

It should be noted that “God’s algorithm” has been known for quite a while: in other words, computers could quickly solve any given position in a minimal number of moves. But even with the algorithm in hand, getting God’s number by brute force would take too long, since there are 4.3 \times 10^{19} possible positions, and one would have to check that they can all be solved in 20 moves or less. Getting around checking this humongous number of positions required a lot of ingenuity, not just computing muscle, and the team should be warmly congratulated.
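That 4.3 \times 10^{19} comes from a standard counting argument: permute and orient the corners and edges, then quotient by the orientation and parity constraints. A few lines of Python reproduce it.

```python
import math

# Corners: 8! placements and 3^7 twists (the last twist is forced);
# edges: 12! placements and 2^11 flips (the last flip is forced);
# dividing by 2: corner and edge permutations must have the same parity.
positions = (math.factorial(8) * 3**7 * math.factorial(12) * 2**11) // 2
print(positions)           # 43252003274489856000
print(f"{positions:.2e}")  # ~4.33e+19
```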

For the rest of this note, I’d like to briefly discuss a tiny bit of the mathematics involved, and an open problem or two.

First of all, we should be clear about what we mean by “20 moves”: this is in the half-turn metric (HTM), where a move consists of turning any side by either a quarter turn or a half turn. So to mathematicians, “God’s number” is the diameter of the Cayley graph for the generating set \{ L, R, U, D, F, B, L^2, R^2, U^2, D^2, F^2, B^2 \}. This is the metric of choice for cubers, and probably has been for thirty years. To cubers, turning a side is turning a side.

Of course a half turn is just two quarter turns, and this leads us to consider the quarter-turn metric (QTM), where we only allow turning any side a quarter turn. You can still give a side a half turn, but now this counts as two moves. What is the diameter of the Cayley graph for the generating set \{ L, R, U, D, F, B \}? This is probably a more natural generating set to mathematicians, who might resist including both x and x^2 in a list of generators for a group.

It is now known (due to a calculation by Rokicki) that 29 moves are always sufficient, and he suspects that the methods used to show that 20 suffice for HTM could probably easily bring this down to 28.

On the other hand, there is essentially only one position known that requires 26 moves in the QTM, and only two known that require 25 moves. This is a very different situation from HTM, where there are roughly 300 million positions at distance 20 from solved. Extensive searches by Rokicki have failed to find any more positions requiring 25 or 26 moves. So the conjecture is that the diameter of the Cayley graph with the QTM generators is 26, but it might be a while before we know that for sure. In particular, the methods used to settle HTM apparently don’t work quite as cleanly for QTM.

So this is one problem: show that God’s number for quarter-turn metric is 26.

What might be nice to see, for either metric, is a description of the geometry or topology of the Rubik’s cube. One could imagine, for example, adding higher dimensional cells to extend the Cayley graph to a polyhedral complex, and studying the topology of this complex.

Finally, the following also seems to be wide open: if one makes random moves on the cube (i.e. takes a random walk on one of the Cayley graphs), what can be said about the Markov chain mixing time? A famous result of Bayer and Diaconis is that it takes “seven shuffles” to mix up a deck of 52 cards, and it would be mathematically nice to have such an answer for the Cube. (The diameter of the Cayley graph being 20 puts an upper bound on the mixing time, but the bound obtained in this naive way is much too large to be close to the truth.)

But unlike card shuffling, which actually has some practical application, at least for (say) casinos that want to make sure their decks are randomized, this would be purely for knowledge and enjoyment. Apparently this kind of mixing time question was once an issue for speed-solving tournaments: they would make a sequence of random moves to mix up the cube, and one wanted to know how many random moves to make. But this is no longer an issue. Instead of choosing a sequence of random moves, they now have the computer choose a random position uniformly from all 4.3 \times 10^{19} positions, and then use God’s algorithm to put the cube into that position.

(Special thanks to Tom Rokicki for very helpful conversations in writing this note.)

Coloring the integers

Here is a problem I composed, which recently appeared on the Colorado Mathematical Olympiad.

If one wishes to color the integers so that every two integers that differ by a factorial get different colors, what is the smallest number of colors necessary?

I might describe a solution, as well as some related history, in a future post. But for now I’ll just say that Adam Hesterberg solved this problem at Canada/USA Mathcamp a few summers ago, claiming the $20 prize I offered almost as soon as I offered it. At the time, I suspected but still did not know the exact answer.

Although the wording of the problem strongly suggests that the answer is finite, I don’t think that this is entirely obvious.
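One cheap experiment along these lines: greedily color an initial segment of the integers, always using the smallest color not forbidden by previously colored neighbors. Greedy coloring gives only an upper bound for the segment, not a chromatic number for all of \mathbb{Z}, but the output is suggestive. A sketch:

```python
import math

N = 200000
factorials = []
k = 1
while math.factorial(k) <= N:
    factorials.append(math.factorial(k))  # 1, 2, 6, 24, 120, ...
    k += 1

color = {}
num_colors = 0
for x in range(N + 1):
    forbidden = {color[x - f] for f in factorials if x - f >= 0}
    c = next(c for c in range(num_colors + 1) if c not in forbidden)
    color[x] = c
    num_colors = max(num_colors, c + 1)
print(num_colors)  # colors used by greedy on {0, ..., N}
```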

Along those lines, here is another infinite graph with finite chromatic number.

If one wishes to color the points of the Euclidean plane so that every two points at distance one get different colors, what is the smallest number of colors necessary?

This is one of my favorite unsolved math problems, just for being geometrically appealing and apparently intractably hard. After fifty years of many people thinking about it, all that is known is that the answer is 4, 5, 6, or 7. Recent work of Shelah and Soifer suggests that the exact answer may depend on the choice of set-theoretic axioms.

This inspired the following related question.

If one wishes to color the points of the Euclidean plane so that every two points at factorial distance get different colors, do finitely many colors suffice?

More generally, if S is a sequence of positive real numbers that grows quickly enough (say exponentially), and one forbids pairs of points at distance s \in S from receiving the same color, one would suspect that finitely many colors suffice. On the other hand, if S grows slowly enough (say linearly), one might expect that infinitely many colors are required.

RIP Martin Gardner

Martin Gardner passed away yesterday, at 95; here is the NYTimes obituary.

I grew up with his puzzles and mathematical recreations, as I’m sure many mathematicians of my generation (and generations before) did. I am reminiscing this evening about interesting topics that Gardner introduced me to, and the following geometric gem comes to mind. The problem was apparently first proposed by Henri Lebesgue in 1914 and is still open, so it is almost 100 years old now.

Question: Find a universal cover C of least area in the plane, meaning a set that contains a congruent copy of every planar set of unit diameter.

If one restricts to convex C, then such a minimum is guaranteed to exist by the Blaschke selection theorem, a compactness result for sequences of bounded convex sets. A regular hexagon of unit width does the trick, but surprisingly, one can take off tiny pieces and still have a universal cover.
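For a sense of the numbers involved: a regular hexagon of width w (the distance between opposite sides) has area (\sqrt{3}/2) w^2, so the baseline is about 0.866, and the subsequent improvements shave off only tiny slivers.

```python
import math

w = 1.0                        # width: distance between opposite sides
side = w / math.sqrt(3)        # side length of a regular hexagon of width w
area = 3 * math.sqrt(3) / 2 * side**2
print(area)                    # sqrt(3)/2 ~ 0.86603, the baseline to beat
```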

No one has ever exhibited such a minimum with proof, although there have been a few false claims of optimal solutions. I just searched and found an amusing 1992 Geometriae Dedicata article by H. C. Hansen in which the author shaves 4 \times 10^{-11} off the then reigning world record. I am not sure if anyone has improved on this result since, but this might be a fun thing to think about; indeed, Hansen suggests in the article how computers might help solve the problem, and computers have come a long way since 1992.

Gardner’s name came up in conversation just the other day. Tom Rokicki came to talk to my Rubik’s Cube class at Stanford about his work on “God’s algorithm” — Tom set a world record recently by proving that every Rubik’s Cube, no matter how scrambled, can be solved in 22 or fewer face turns. I asked him how long it would be until we have a proof of the conjectured answer of 20. He said he thought he might have 21 in the next few months, and then said that he wanted to have it down to 20 moves by Gathering for Gardner 2012. I hope that the Gatherings for Gardner continue, and it would be nice to see Tom succeed in his goal too.

I am not sure if anyone in history has ever made more people smile and shake their heads about mathematical things than Martin Gardner. His writings illuminated the magical and mysterious qualities of mathematics. Gardner was also famous as a skeptic and a debunker, like many magicians before him. The NYTimes obituary reports:

He ultimately found no reason to believe in anything religious except a human desire to avoid “deep-seated despair.” So, he said, he believed in God.

Well, godspeed to you, Martin, and thanks for all the wonderful writings and beautiful thoughts.

Francisco Santos disproves Hirsch conjecture

The Hirsch conjecture has been around for over 50 years. The conjecture states that a d-dimensional polytope with n facets has diameter at most n-d. (The facets of a polytope are its maximal-dimensional faces, i.e. the (d-1)-dimensional faces, and saying that the diameter is at most n-d is just saying that every pair of vertices can be connected by a path of at most n-d edges.)
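For intuition, the d-dimensional cube meets the bound with equality: it has n = 2d facets, and its graph has diameter exactly d = n - d. A quick check, assuming networkx is installed:

```python
import networkx as nx

# The d-cube has n = 2d facets, and its graph has diameter exactly d = n - d.
for d in range(2, 8):
    G = nx.hypercube_graph(d)
    print(d, nx.diameter(G), 2 * d - d)
```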

The known upper bounds on diameter are very far from n-d; in fact no bound that is even polynomial (much less linear) in n and d is known. It was considered substantial progress when Gil Kalai obtained the first subexponential upper bounds. This state of affairs led many to speculate that the conjecture is false, and apparently Victor Klee encouraged many people over the years to try to disprove it.

Francisco Santos has just announced an explicit counterexample to the Hirsch conjecture, in dimension d = 43 with n =86 facets. Details to be described at the Grünbaum–Klee meeting in Seattle this summer.

To researchers on polytopes, this result may not be too surprising. Nevertheless, it is an old problem and it is nice to see it finally resolved. And there is something pleasing about finding explicit counterexamples in higher dimensional spaces. It offers a concrete way to see how bad our intuition for higher dimensions really is. Kahn and Kalai’s spectacular counterexample to “Borsuk’s conjecture” (in dimension d = 1325) comes to mind. I look forward to seeing the details of Santos’s construction.

Many Markov components

In my recent preprint, I showed that once r \ge c / n the Metropolis Markov chain on configurations of hard discs is no longer ergodic. In particular, even for reasonably small radius there are stable configurations of discs. In this note the goal is to make a more quantitative statement about the number of Markov components, in particular when r > c_2 / \sqrt{n} (which is the usual range of interest in statistical mechanics).

For the main construction, start with the following configuration of 24 discs in a square. (It is not relevant to the following discussion, but it is fun to point out that this is the densest packing of 24 equal sized discs in a square.)

This configuration of 24 discs is easily seen to be stable (meaning that each disc is individually held in place by its neighboring discs and the boundary of the square), and in fact it is still stable if we delete a carefully chosen set of four discs as follows.

By stacking k^2 of these together one has stable configurations with 20 k^2 discs.

Finally one can add discs in any of the “holes” in these kinds of configurations, leading to more and more possibilities for stable configurations. For example, here is one of the {36 \choose 18} \approx 9.08 \times 10^9 possibilities for dropping discs in 18 of the 36 holes in the previous illustration.

By continuing constructions like this for larger and larger k we observe two facts.

(1) As n \to \infty the number of components can grow quite fast. To make this quantitative, we expand on the example. Suppose there are 20 k^2 discs to start; then there are 4 k^2 holes. Suppose we drop 2 k^2 discs into those holes. Then the total number of discs is n = 22 k^2, and the number of Markov components is at least {4 k^2 \choose 2 k^2} = {2n/11 \choose n/11}. (This is for unlabeled discs. For labeled discs, multiply this by n!.) By Stirling’s formula, this number grows exponentially fast in n (something like 1.134^n); see the sanity check after item (2) below.

In this particular example the discs have total area \pi r^2 n \approx 0.67, but that could obviously be adjusted by changing the ratio of extra discs to holes.

(2) For every \lambda \in (0.60, 0.75) there is an \alpha = \alpha(\lambda) > 1 and a sequence of configurations C_1, C_2, \ldots, with C_i \in Config(i, r(i)), such that the number of Markov components is greater than \alpha^i for sufficiently large i, and \pi r(i)^2 i \to \lambda as i \to \infty.
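Here is the promised sanity check on the growth rate in (1): the n-th root of {2n/11 \choose n/11} should approach 4^{1/11} \approx 1.1343. The {36 \choose 18} count from the example above is easy to confirm at the same time.

```python
import math

def log_binom(a, b):
    return math.lgamma(a + 1) - math.lgamma(b + 1) - math.lgamma(a - b + 1)

print(math.comb(36, 18))  # 9075135300, about 9.08e9 as claimed above
print(4 ** (1 / 11))      # limiting growth rate, ~1.13431
for k in (10, 40, 160):
    n = 22 * k * k        # total number of discs
    print(n, math.exp(log_binom(4 * k * k, 2 * k * k) / n))
```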

It turns out that what are described here are separate components only in the sense that a Markov chain that moves one disc at a time cannot mix from one state to another. (The four discs in the center of a square can rotate about the center of the square!) But a small modification of this construction would seem to give exponentially many components in the sense of \pi_0, each with small positive measure.

Is exponentially many components the maximum possible? The Milnor–Thom theorem gives an upper bound much larger than this, something like O(e^{c n^2}) for some c > 0. It would also be very interesting to know any bounds on the topological complexity of these path components.

A conjecture concerning random cubical complexes

Nati Linial and Roy Meshulam defined a certain kind of random two-dimensional simplicial complex, and found the threshold for vanishing of homology. Their theorem is in some sense a perfect homological analogue of the classical Erdős–Rényi characterization of the threshold for connectivity of the random graph.

Linial and Meshulam’s definition was as follows. Y(n,p) has the complete graph on n vertices as its 1-skeleton, with each of the {n \choose 3} triangular faces inserted independently with probability p, which may depend on n. We say that Y(n,p) asymptotically almost surely (a.a.s.) has property \mathcal{P} if the probability that Y(n,p) \in \mathcal{P} tends to one as n \to \infty.

Nati Linial and Roy Meshulam showed that if \omega is any function that tends to infinity with n and if p = (\log{n} + \omega) / n then a.a.s H_1( Y(n,p) , \mathbb{Z} / 2) =0 , and if p = (\log{n} - \omega) / n then a.a.s H_1( Y(n,p) , \mathbb{Z} / 2)  \neq 0 .

(This result was later extended to arbitrary finite field coefficients and arbitrary dimension by Meshulam and Wallach. It may also be worth noting for the topologically inclined reader that their argument is actually a cohomological one, but in this setting universal coefficients gives us that homology and cohomology are isomorphic vector spaces.)

Eric Babson, Chris Hoffman, and I found the threshold for vanishing of the fundamental group \pi_1(Y(n,p)) to be quite different. In particular, we showed that if \epsilon > 0 is any constant and p \le n^{-1/2 -\epsilon} then a.a.s. \pi_1 ( Y(n,p) ) \neq 0, and if p \ge n^{ -1/2 + \epsilon} then a.a.s. \pi_1 ( Y(n,p) ) = 0. The harder direction is to show that on the left side of the threshold the fundamental group is nontrivial, and this uses Gromov’s ideas of negative curvature. In particular, to show that \pi_1 is nontrivial we first have to show that it is a hyperbolic group.

[I want to advertise one of my favorite open problems in this area: as far as I know, nothing is known about the threshold for H_1( Y(n,p) , \mathbb{Z}) , other than what is implied by the above results.]

I was thinking recently about a cubical analogue of the Linial–Meshulam setup. Define Z(n,p) to be the one-skeleton of the n-dimensional cube with each square two-dimensional face inserted independently with probability p. This seems to be the natural cubical analogue of the Linial–Meshulam model. So what are the thresholds for the vanishing of H_1 ( Z(n,p) , \mathbb{Z} / 2) and \pi_1 ( Z(n,p) )?

I just did some “back of the envelope” calculations which surprised me. It looks like p must be much larger (in particular, bounded away from zero) before either homology or homotopy is killed. Here is what I think probably happens. For the sake of simplicity assume here that p is constant, although in reality there are o(1) terms that I am suppressing.

(1) If p  < \log{2} then a.a.s H_1 ( Z(n,p) , \mathbb{Z} /2 ) \neq 0, and if p  > \log{2} then a.a.s H_1 ( Z(n,p) , \mathbb{Z} /2 ) = 0.

(2) If p  < (\log{2})^{1/4} then a.a.s. \pi_1 ( Z(n,p) ) \neq 0, and if p > (\log{2})^{1/4} then a.a.s. \pi_1 ( Z(n,p) ) = 0.

Perhaps in a future post I can explain where the numbers \log{2} \approx 0.69315 and (\log{2})^{1/4} \approx 0.91244 come from. Or in the meantime, I would be grateful for any corroborating computations or counterexamples.
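In the spirit of soliciting corroborating computations, here is a GF(2) sketch along the same lines as the Y(n,p) computation earlier in these notes. It is plain Python, so only small n is feasible, and since the conjecture is asymptotic in n, the output should be treated as suggestive at best.

```python
import itertools, random

def h1_dim_mod2_cube(n, p, rng=random.Random(1)):
    """dim H_1(Z(n,p); Z/2): full 1-skeleton of the n-cube, squares kept w.p. p."""
    # An edge of the n-cube is a pair (v, i): vertex v with coordinate i equal
    # to 0, joined to the vertex with coordinate i flipped to 1.
    edge_index = {}
    for v in range(2**n):
        for i in range(n):
            if not (v >> i) & 1:
                edge_index[(v, i)] = len(edge_index)
    # A square is a bottom corner v together with two free coordinates i < j.
    rows = []
    for i, j in itertools.combinations(range(n), 2):
        for v in range(2**n):
            if (v >> i) & 1 or (v >> j) & 1:
                continue
            if rng.random() < p:
                mask = 0
                for w, d in [(v, i), (v, j), (v | 1 << i, j), (v | 1 << j, i)]:
                    mask |= 1 << edge_index[(w, d)]
                rows.append(mask)
    # Rank of the boundary matrix d_2 over GF(2), by Gaussian elimination.
    rank = 0
    for b in range(len(edge_index)):
        pivot = None
        for idx, row in enumerate(rows):
            if (row >> b) & 1:
                pivot = rows.pop(idx)
                break
        if pivot is not None:
            rows = [row ^ pivot if (row >> b) & 1 else row for row in rows]
            rank += 1
    cycles = len(edge_index) - 2**n + 1  # dim ker d_1; the 1-skeleton is connected
    return cycles - rank

for p in (0.6, 0.8):  # straddling the conjectured threshold log 2 ~ 0.693
    print(p, h1_dim_mod2_cube(8, p))
```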