# Analyzing card shuffling machines

Diaconis, Fulman, and Holmes have uploaded a preprint titled “Analysis of Casino Shelf Shuffling Machines.” The paper gives a brief overview of the venerable history of mixing times for card shuffling, going all the way back to early results of Markov and Poincaré, and its main contribution is to analyze a model of shuffle that had not been studied previously. What I found most interesting, though, was their account of successfully convincing people in the business of making card shuffling machines that their machines weren’t adequately mixing up the cards. They gave the manufacturers one mathematical argument, based on total variation distance, which the manufacturers didn’t accept, and then another argument, based on a card guessing game, which they did.

I’ll describe the card guessing game. I flip through a deck of 52 cards, one card at a time, and before I flip each card you try to guess what it will be. Let’s say you have a perfect memory for every card that’s already been flipped, so you obviously won’t guess those. On the other hand, if the cards start out in a truly random order, you have no better strategy than to guess uniformly among the remaining cards. An easy analysis shows that your best possible expected number of correct guesses is the harmonic number ${1 \over 52} + {1 \over 51} + \dots + { 1 \over 1} \approx 4.5$: with $k$ cards remaining, any guess is correct with probability $1/k$. On the other hand, the authors describe a strategy (conjectured to be best possible) that guesses an average of $9.5$ cards correctly on a totally ordered deck run through the shelf shuffling machine only once. This strongly suggests that the cards are not sufficiently random.
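The harmonic-number expectation is easy to check numerically. Here is a minimal sketch (my own, not from the paper) that computes the exact expectation for a truly random deck and confirms it with a Monte Carlo simulation; the function names and the fixed-strategy choice of always guessing the smallest remaining card are illustrative conveniences — for a uniformly random deck, every guessing strategy over the remaining cards has the same expectation.

```python
import random
from fractions import Fraction

def expected_correct_guesses(n=52):
    """Exact expectation for a uniformly random deck of n cards.

    With k cards remaining, any guess among them is correct with
    probability 1/k, so the expectation is the harmonic number
    H_n = 1/n + 1/(n-1) + ... + 1/1.
    """
    return sum(Fraction(1, k) for k in range(1, n + 1))

def simulate_correct_guesses(n=52, trials=20000, seed=0):
    """Monte Carlo estimate: guess the smallest unseen card each turn."""
    rng = random.Random(seed)
    total = 0
    for _ in range(trials):
        deck = list(range(n))
        rng.shuffle(deck)
        remaining = set(deck)
        for card in deck:
            if min(remaining) == card:  # any fixed remaining card works equally well
                total += 1
            remaining.remove(card)
    return total / trials

print(float(expected_correct_guesses()))  # H_52, about 4.54
print(simulate_correct_guesses())
```

The contrast with the reported 9.5 correct guesses after one pass through the machine is what made the argument persuasive: the gap is far too large to attribute to chance.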

This analysis convinced the company to have the shelf shuffling machine make two passes through the deck, rather than one as they had initially hoped. The president of the company told them, “We are not pleased with your conclusions, but we believe them and that’s what we hired you for.”