In search of a counterexample to the Lovász conjecture

It is a celebrated result of John Dixon (The probability of generating the symmetric group, Math. Z. 110 (1969), 199–205) that if one chooses two random permutations in the symmetric group S_n, uniformly (i.e. each with probability 1 / n!) and independently, then the probability that the two permutations generate the whole group tends to 3/4 as n \to \infty. It is clear that this probability can never exceed 3/4, since there is a 1/4 probability that both permutations are even, in which case they generate, at most, the alternating group. Interestingly enough, Dixon’s paper covers this possibility, and he actually shows that the probability that two random permutations generate the alternating group tends to 1/4 as n \to \infty.
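(For small n one can check this 3/4 limit empirically. Here is a minimal sketch in Python; the values of n, the trial counts, and the use of sympy to compute the order of the generated subgroup are just convenient choices of mine.)

```python
# Illustrative sketch (not from Dixon's paper): estimate the probability that
# two uniform random permutations generate all of S_n, for a few small n.
import random
from math import factorial
from sympy.combinatorics import Permutation, PermutationGroup

def generation_probability(n, trials=200):
    hits = 0
    for _ in range(trials):
        x = Permutation(random.sample(range(n), n))  # uniform random permutation
        y = Permutation(random.sample(range(n), n))
        if PermutationGroup([x, y]).order() == factorial(n):
            hits += 1
    return hits / trials

if __name__ == "__main__":
    for n in (5, 8, 12):
        print(n, generation_probability(n))  # should drift toward 3/4
```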

Equivalently, if two random elements x, y of the alternating group A_n are chosen independently and uniformly at random, the probability that they generate the whole group tends to 1 as n \to \infty. This leads me to my question: what is the probability that the 4-regular Cayley graph with generators x, y, x^{-1}, y^{-1} is not Hamiltonian, as n \to \infty?

Showing that this probability is bounded away from 0 would provide a counterexample for a notorious problem about vertex-transitive graphs. So we might expect that this is hard. But is it even possible that it is true, or is there some obvious reason that such graphs will tend to be Hamiltonian?

Another approach in the same spirit would be computational rather than asymptotic. Suppose we look at thousands of random Cayley graphs on the alternating groups A_5 and A_6, for example. It is straightforward to check that they are connected. Is it within reach for a cleverly designed algorithm on modern computers to conclusively rule out Hamiltonicity for a 4-regular graph on 60 or 360 vertices? I would also be happy with a computer-aided proof that the conjecture is false.
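For what it’s worth, here is a minimal sketch of the naive version of this experiment: build the Cayley graph of A_5 on a random generating pair and search for a Hamiltonian cycle by brute-force backtracking. The helper functions are my own, and this is only an illustration, not the cleverly designed algorithm the question asks for.

```python
# Sketch: Cayley graph of A_5 with connection set {x, y, x^-1, y^-1} for a
# random generating pair (x, y), followed by a naive backtracking search for
# a Hamiltonian cycle.  On 60 vertices this usually finishes quickly, but it
# is not a serious tool for ruling Hamiltonicity out.
import random
from sympy.combinatorics import PermutationGroup
from sympy.combinatorics.named_groups import AlternatingGroup

def random_generating_pair(group):
    elements = list(group.generate())
    while True:
        x, y = random.sample(elements, 2)
        if PermutationGroup([x, y]).order() == group.order():
            return elements, x, y

def cayley_graph(elements, x, y):
    index = {g: i for i, g in enumerate(elements)}
    gens = [x, y, ~x, ~y]                      # generators and their inverses
    return [{index[g * s] for s in gens} for g in elements]

def has_hamiltonian_cycle(adj):
    n = len(adj)
    visited = [False] * n
    visited[0] = True

    def extend(v, count):
        if count == n:
            return 0 in adj[v]                 # can we close the cycle?
        for w in adj[v]:
            if not visited[w]:
                visited[w] = True
                if extend(w, count + 1):
                    return True
                visited[w] = False
        return False

    return extend(0, 1)

if __name__ == "__main__":
    A5 = AlternatingGroup(5)
    elements, x, y = random_generating_pair(A5)
    print("Hamiltonian cycle found:", has_hamiltonian_cycle(cayley_graph(elements, x, y)))
```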

Historical note: It is called the Lovász conjecture, even though Lovász just asked the question (and perhaps conjectured the other way). I am under the impression that some prominent people in this field have felt that the answer should be no. In particular, Babai does not believe it.

Analyzing card shuffling machines

Diaconis, Fulman, and Holmes have uploaded a preprint titled “Analysis of Casino Shelf Shuffling Machines.” The paper provides a brief overview of the venerable history of the mixing time of card shuffling, going all the way back to early results of Markov and Poincaré, and its main point is to analyze a model of shuffling that had not been studied previously. What I found most interesting, though, was their account of successfully convincing people in the business of making card shuffling machines that their machines weren’t adequately mixing up the cards. They first gave the manufacturers a mathematical argument, based on total variation distance, which the manufacturers didn’t accept, and then a second argument, based on a card guessing game, which they did.

I’ll describe the card guessing game. I flip through a deck of 52 cards, one card at a time, and before I flip each card you try to guess what it will be. Let’s say you have a perfect memory for every card that has already been flipped, so you obviously won’t guess those. On the other hand, if the cards are in a truly random order to start with, you obviously have no better strategy than to guess uniformly among the remaining cards. An easy analysis shows that your best possible expected number of correct guesses is {1 \over 52} + {1 \over 51} + \dots + { 1 \over 1} \approx 4.5. On the other hand, the authors describe a strategy (conjectured to be best possible) that allows one to guess an average of 9.5 cards correctly on a totally ordered deck run through the shelf shuffling machine only once. This strongly suggests that the cards are not sufficiently random.
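Here is a quick sanity check of the 4.5 figure (my own sketch, not anything from the paper): compute the harmonic number H_{52} and simulate the guessing game on a uniformly shuffled deck.

```python
# Sketch: guessing uniformly among unseen cards on a well-shuffled deck gives
# an expected 1/52 + 1/51 + ... + 1/1 = H_52 correct guesses.
import random

def harmonic(n):
    return sum(1 / k for k in range(1, n + 1))

def simulate_guessing(trials=20_000, deck_size=52):
    total = 0
    for _ in range(trials):
        deck = list(range(deck_size))
        random.shuffle(deck)
        remaining = list(deck)                 # cards not yet flipped
        correct = 0
        for card in deck:
            if random.choice(remaining) == card:
                correct += 1
            remaining.remove(card)
        total += correct
    return total / trials

if __name__ == "__main__":
    print("H_52 =", harmonic(52))              # about 4.54
    print("simulated:", simulate_guessing())
```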

This analysis convinced the company to have the shelf shuffling machine make two passes through the deck, rather than one as they had initially hoped. The president of the company told them that “We are not pleased with your conclusions, but we believe them and that’s what we hired you for.”

Mathematical Zen

In 1974 Frank Harary and Ronald C. Read published a paper with the incredible title, “Is the null-graph a pointless concept?”

The abstract reads as follows.

The graph with no points and no lines is discussed critically. Arguments for and against its official admittance as a graph are presented. This is accompanied by an extensive survey of the literature. Paradoxical properties of the null-graph are noted. No conclusion is reached.

A ninth planet?

John Matese and Daniel Whitmire, from the University of Louisiana at Lafayette, are claiming that data from NASA’s Wide-field Infrared Survey Explorer already suggests that there is a large planet in the outer solar system. This hypothetical planet, which they have nicknamed Tyche, orbits the sun at about 15,000 AU and weighs in at four times the mass of Jupiter. (Apparently Matese suggested this theory as early as 1999, based on a perceived statistical fluke in the orbits of comets.)

When I read this I wondered at first whether it was even conceivable, and in particular whether 15,000 AU would even still be considered part of our solar system. I looked it up, and it is thought that the sun’s gravitational field dominates that of other stars out to about two light-years, or 125,000 AU. The Oort cloud, a hypothetical cloud of a trillion comets, which Freeman Dyson has speculated could be a long-term home for our distant descendants, is thought to lie between 50,000 and 100,000 AU from the sun.

It seems that the Tyche hypothesis is not widely accepted in the astronomy community, and NASA has demurred, suggesting that we will know more in coming months or years. I, for one, welcome our new giant planet overlord.

Tyche

Thanks to Dr. Heiser for the link.

Packing tetrahedra

Last spring I saw a great colloquium talk on packing regular tetrahedra in space by Jeffrey Lagarias. He pointed out that in some sense the problem goes back to Aristotle, who apparently claimed that they tile space. Since Aristotle was thought to be infallible, this was repeated throughout the ages until someone (maybe Minkowski?) noticed that they actually don’t.

John Conway and Sal Torquato considered various quantitative questions about packing, tiling, and covering, and in particular asked about the densest packing of tetrahedra in space. They optimized over a very special kind of periodic packing, and in the densest packing they found, the tetrahedra take up about 72% of space.

Compare this to the densest packing of spheres in space, in which the spheres take up about 74% of space. If Conway and Torquato’s example were actually the densest packing of tetrahedra, it would be a counterexample to Ulam’s conjecture that the sphere is the worst case scenario for packing.

But a series of papers improving the bound followed, and as of early 2010 the record is held by Chen, Engel, and Glotzer with a packing fraction of 85.63%.

I want to advertise two attractive open problems related to this.

(1) Good upper bounds on tetrahedron packing.

At the time of the colloquium talk I saw several months ago, it seemed that despite a whole host of papers improving the lower bound on tetrahedron packing, there was no nontrivial upper bound in the literature. Since then Gravel, Elser, and Kallus have posted a paper on the arXiv which gives an upper bound. This is very cool, but the upper bound on density they give is something like 1 - 2.6 \times 10^{-25}, so there is still a lot of room for improvement.

(2) Packing tetrahedra in a sphere.

As far as I know, even the following problem is open. Let’s make our lives easier by discretizing the problem: we simply ask how many regular tetrahedra we can pack in a sphere. Okay, let’s make it even easier: the edge length of each tetrahedron is the same as the radius of the sphere. Even easier: every tetrahedron has to have one corner at the center of the sphere. Now how many tetrahedra can you pack in the sphere?

It is fairly clear that you can get 20 tetrahedra in the sphere, since the edge length of the icosahedron is just slightly longer than the radius of its circumscribed sphere. By comparing the volume of the regular tetrahedron to the volume of the sphere, we get a trivial upper bound of 35 tetrahedra. But by comparing surface area instead, we get an upper bound of 22 tetrahedra.
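Both easy bounds take only a few lines to reproduce. In the sketch below (my own), the surface-area bound is computed via the solid angle that a regular tetrahedron subtends at one of its vertices, which amounts to the same comparison.

```python
# Sketch of the two counting bounds: volume of the sphere vs. volume of a
# regular tetrahedron with edge r, and total solid angle 4*pi vs. the solid
# angle a regular tetrahedron subtends at a vertex (3*arccos(1/3) - pi).
import math

r = 1.0  # sphere radius = tetrahedron edge length

tet_volume = r**3 / (6 * math.sqrt(2))                # regular tetrahedron, edge r
sphere_volume = (4 / 3) * math.pi * r**3
print(math.floor(sphere_volume / tet_volume))         # 35

vertex_solid_angle = 3 * math.acos(1 / 3) - math.pi   # about 0.5513 steradians
print(math.floor(4 * math.pi / vertex_solid_angle))   # 22
```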

There is apparently a folklore conjecture that 20 tetrahedra is the right answer, so proving this comes down to ruling out 21 or 22. To rule out 21 seems like a nonlinear optimization problem in some 63-dimensional space.

I’d guess that this is within the realm of computation if someone made some clever reductions. Oleg Musin settled the question of the kissing number in 4-dimensional space in 2003. Ruling out a kissing number of 25 is essentially a matter of optimizing some function over a 75-dimensional space. This sounds a little daunting, but it is apparently much easier than Thomas Hales’s proof of the Kepler conjecture. (For a nice survey of this work, see this article by Pfender and Ziegler.)

The fundamental group of random 2-complexes

Eric Babson, Chris Hoffman, and I recently posted major revisions of our preprint, “The fundamental group of random 2-complexes,” to the arXiv. This article will appear in the Journal of the American Mathematical Society. This note is intended to be a high-level summary of the main result, with a few words about the techniques.

The Erdős–Rényi random graph G(n,p) is the probability space of all graphs on vertex set [n] = \{ 1, 2, \dots, n \}, with each edge included independently with probability p. Frequently p = p(n) and n \to \infty, and we say that G(n,p) asymptotically almost surely (a.a.s.) has property \mathcal{P} if \mbox{Pr} [ G(n,p) \in \mathcal{P} ] \to 1 as n \to \infty.

A seminal result of Erdős and Rényi is that p(n) = \log{n} / n is a sharp threshold for connectivity. In particular if p > (1+ \epsilon) \log{n} / n, then G(n,p) is a.a.s. connected, and if p < (1- \epsilon) \log{n} / n, then G(n,p) is a.a.s. disconnected.
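This threshold is easy to see in simulation. Here is a minimal sketch using networkx; the value of n and the number of trials are arbitrary choices of mine.

```python
# Estimate Pr[G(n,p) is connected] for p = c * log(n)/n, with c below and above 1.
import math
import networkx as nx

def connectivity_probability(n, p, trials=100):
    return sum(nx.is_connected(nx.gnp_random_graph(n, p))
               for _ in range(trials)) / trials

if __name__ == "__main__":
    n = 2000
    for c in (0.8, 1.0, 1.2):
        p = c * math.log(n) / n
        print(c, connectivity_probability(n, p))
```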

Nathan Linial and Roy Meshulam introduced a two-dimensional analogue of G(n,p), and proved an analogue of the Erdős-Rényi theorem. Their two-dimensional analogue is as follows: let Y(n,p) denote the probability space of all 2-dimensional (abstract) simplicial complexes with vertex set [n] and edge set {[n] \choose 2} (i.e. a complete graph for the 1-skeleton), with each of the { n \choose 3} triangles included independently with probability p.

Linial and Meshulam showed that p(n) = 2 \log{n} / n is a sharp threshold for vanishing of first homology H_1(Y(n,p)). (Here the coefficients are over \mathbb{Z} / 2. This was generalized to \mathbb{Z} /p for all p by Meshulam and Wallach.) In other words, once p is much larger than 2 \log{n} / n, every (one-dimensional) cycle is the boundary of some two-dimensional subcomplex.
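To make the definitions concrete, here is a small sketch (mine, not the authors’) that builds Y(n,p) and computes \dim H_1(Y(n,p); \mathbb{Z}/2) as (number of edges) - rank \partial_1 - rank \partial_2, with ranks computed over GF(2). It is only practical for tiny n.

```python
# Sketch: Y(n,p) has the complete 1-skeleton, and each of the C(n,3) triangles
# is kept independently with probability p.  Then over Z/2,
#   dim H_1 = (#edges) - rank(d1) - rank(d2).
# Rows of the boundary matrices are encoded as Python ints (bit vectors).
import itertools
import math
import random

def gf2_rank(rows):
    """Rank over GF(2); each row is an int used as a bit vector."""
    pivots = {}                           # leading-bit position -> pivot row
    rank = 0
    for row in rows:
        while row:
            lead = row.bit_length() - 1
            if lead in pivots:
                row ^= pivots[lead]       # eliminate the leading bit
            else:
                pivots[lead] = row
                rank += 1
                break
    return rank

def h1_dim_mod2(n, p):
    edges = list(itertools.combinations(range(n), 2))
    edge_index = {e: i for i, e in enumerate(edges)}
    triangles = [t for t in itertools.combinations(range(n), 3)
                 if random.random() < p]

    # Boundary of edge {u,v} is u + v; boundary of triangle {a,b,c} is
    # {a,b} + {a,c} + {b,c} (all mod 2).
    d1 = [(1 << u) | (1 << v) for (u, v) in edges]
    d2 = [(1 << edge_index[(a, b)]) | (1 << edge_index[(a, c)])
          | (1 << edge_index[(b, c)]) for (a, b, c) in triangles]

    return len(edges) - gf2_rank(d1) - gf2_rank(d2)

if __name__ == "__main__":
    n = 30
    for c in (1.0, 3.0):
        p = min(1.0, c * 2 * math.log(n) / n)
        print(c, h1_dim_mod2(n, p))   # typically 0 once c is well above 1
```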

Babson, Hoffman, and I showed that the threshold for vanishing of \pi_1 (Y(n,p)) is much larger: up to some log terms, the threshold is p = n^{-1/2}. In other words, you must add a lot more random two-dimensional faces before every cycle is the boundary of not just any subcomplex, but the boundary of the continuous image of a topological disk. A precise statement is as follows.

Main result Let \epsilon >0 be arbitrary but constant. If p \le n^{-1/2 - \epsilon} then \pi_1 (Y(n,p)) \neq 0, and if p \ge n^{-1/2 + \epsilon} then \pi_1 (Y(n,p)) = 0, asymptotically almost surely.

It is relatively straightforward to show that when p is much larger than n^{-1/2}, a.a.s. \pi_1 =0. Almost all of the work in the paper is showing that when p is much smaller than n^{-1/2} a.a.s. \pi_1 \neq 0. Our methods depend heavily on geometric group theory, and on the way to showing that \pi_1 is non-vanishing, we must show first that it is hyperbolic in the sense of Gromov.

Proving this involves some intermediate results which do not involve randomness at all, and which may be of independent interest in topological combinatorics. In particular, we must characterize the topology of sufficiently sparse two-dimensional simplicial complexes. The precise statement is as follows:

Theorem. If \Delta is a finite simplicial complex such that f_2 (\sigma) / f_0(\sigma) \le 1/2 for every subcomplex \sigma, then \Delta is homotopy equivalent to a wedge of circles, spheres, and projective planes.

(Here f_i denotes the number of i-dimensional faces.)

Corollary. With hypotheses as above, the fundamental group \pi_1( \Delta) is isomorphic to a free product \mathbb{Z} * \mathbb{Z} * \dots * \mathbb{Z} / 2 * \mathbb{Z}/2, for some number of \mathbb{Z}'s and \mathbb{Z}/2's.

It is relatively easy to check that if p = O(n^{-1/2 - \epsilon}) then with high probability subcomplexes of Y(n,p) on a bounded number of vertices satisfy the hypothesis of this theorem. (Of course Y(n,p) itself does not, since it has f_0 = n and roughly f_2 \approx n^{5/2} as p approaches n^{-1/2}.)

But the corollary gives us that the fundamental group of small subcomplexes is hyperbolic, and then Gromov’s local-to-global principle allows us to patch these together to conclude that \pi_1 ( Y(n,p) ) is hyperbolic as well. This gives a linear isoperimetric inequality on \pi_1, which we can “lift” to a linear isoperimetric inequality on Y(n,p).

But if Y(n,p) were simply connected and satisfied a linear isoperimetric inequality, then every 3-cycle would be contractible using a bounded number of triangles, and this is easy to rule out with a first-moment argument.

There are a number of technical details that I am omitting here, but hopefully this at least gives the flavor of the argument.

An attractive open problem in this area is to identify the threshold t(n) for vanishing of H_1( Y(n,p), \mathbb{Z}). It is tempting to think that t(n) \approx 2 \log{n} / n, since this is the threshold for vanishing of H_1(Y(n,p), \mathbb{Z} / m) for every integer m. This reasoning would work for any fixed simplicial complex, but it does not apply in the limit: Meshulam and Wallach’s result holds for each fixed m as n \to \infty, so in particular it does not rule out torsion in integer homology that grows with n.

As far as we know at the moment, no one has written down any improvements to the trivial bounds on t(n), that 2 \log{n} / n \le t(n) \le n^{-1/2}. Any progress on this problem will require new tools to handle torsion in random homology, and will no doubt be of interest in both geometric group theory and stochastic topology.