

My Problem

When a human deals cards, we go through a process of shuffling and dealing to players in sequence, just to have the players re-sort their cards. We don't think about how inefficient this is because it is essential to fair, random dealing. At first pass I did just this. I wrote a shuffle algorithm that picked a card from an array of 0-51 at random, then moved every number after the pick left by one in the array. Because the higher numbers were all pushed left, they would overwrite the selection so that it couldn't be picked twice, and leave a stray copy of the highest number at the end. It ran 52 times, generating a random selection from a range that shrank once per run to exclude that duplicate at the end. Plugging the selected number from each run into a separate array at the position equivalent to the run number yields a shuffled array containing each number between 0 and 51 exactly once. Dealing was then as simple as putting every fourth card into an array for each player. But this leaves you with 4 shuffled arrays, and that is what eventually placed my palm firmly on my face.
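For concreteness, here's a minimal Python sketch of that first approach. The original code isn't shown here, so the names and structure are just an illustration of the steps described above:

```python
import random

def naive_shuffle():
    """Shuffle 0-51 by picking from a shrinking range and shifting
    everything after the pick left by one, as described above."""
    deck = list(range(52))              # starts sorted: 0..51
    shuffled = []
    limit = 52                          # size of the range we may still pick from
    for run in range(52):
        i = random.randrange(limit)     # random position within the remaining range
        shuffled.append(deck[i])        # the run number doubles as the output position
        # Shift every later element left by one, overwriting the pick;
        # this leaves a stray copy of the last element, which the
        # shrinking limit keeps from ever being picked again.
        for j in range(i, limit - 1):
            deck[j] = deck[j + 1]
        limit -= 1
    return shuffled

def deal(shuffled, players=4):
    """Give every fourth card to each player, leaving 4 shuffled hands."""
    return [shuffled[p::players] for p in range(players)]

hands = deal(naive_shuffle())
```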

Before I got to that point, however, I had already written a quick bubble sort. For those who don't know, bubble sorts are really easy but extremely inefficient. A bubble sort requires that you compare each pair of side-by-side items in an array, swapping them if they are out of order. Because an item can only move one position toward its place each time this is done, and the highest or lowest item may start at the exact opposite end, the pass has to be repeated as many times as the list is long, less one. That means a maximum of n less one comparisons per pass, times n less one passes, or (n-1)^2 comparisons where n is the number of items in the set. Luckily 4 hands of 13 is a very manageable 576 comparisons and would have been handled in a tiny fraction of a second.
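A bubble sort over one 13-card hand looks something like this (again a sketch, not the original code):

```python
def bubble_sort(hand):
    """Repeatedly compare adjacent pairs and swap any that are out of
    order; worst case is (n - 1) passes of (n - 1) comparisons each."""
    n = len(hand)
    for _ in range(n - 1):              # up to n - 1 passes
        for i in range(n - 1):          # n - 1 adjacent comparisons per pass
            if hand[i] > hand[i + 1]:
                hand[i], hand[i + 1] = hand[i + 1], hand[i]
    return hand
```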

The Real Problem

While setting up code to pass each player's hand to the bubble sort function, I started thinking about how silly it was to have to sort cards immediately after unsorting them. It screamed redundancy. It was only then that I realized that the point of dealing was really only about random distribution, and had nothing to do with ordering except to keep humans from cheating. It would make way more sense to deal the already sorted deck into piles at random until each hand was full.

This isn't a difficult task, nor is it an illogical solution to leap to; it is just not the way that people behave. Imagine having a deck of cards in ascending order. If I started dealing into four piles I could easily make sure that I deal myself every fourth card to guarantee a straight flush; the last 13 cards to guarantee the highest cards; the last 4 cards to guarantee 4 aces; and so on. Humans go through multiple levels of inefficiency in order to ensure that they don't know which cards are landing in which pile. Your computer is not invested in you winning, so it can truly distribute the cards at random without shuffling them in the slightest.

Thinking Like a Computer

Once you make this realization, the algorithm becomes really simple (a sketch in code follows the list):

  1. Start a deck counter at 0 (it will count up to 51), and set a counter for each hand starting at 0.
  2. Select a random player from 0 to 3.
  3. If that player's hand is full (i.e. their counter has reached 13), repeat the previous step.
  4. Place the current value of the deck counter into the selected player's hand at the position given by that player's counter.
  5. Increment the counter for that player and the deck counter.
  6. Repeat from step 2 until the deck counter passes 51.
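Here is one way those steps could look in Python; the names are placeholders, but the logic follows the list one-for-one:

```python
import random

def deal_sorted_deck(players=4, hand_size=13):
    """Deal a sorted deck (0..51) straight into random hands."""
    hands = [[0] * hand_size for _ in range(players)]
    counts = [0] * players                    # step 1: one counter per hand
    for card in range(players * hand_size):   # step 1: deck counter 0..51
        p = random.randrange(players)         # step 2: pick a random player
        while counts[p] == hand_size:         # step 3: hand full? pick again
            p = random.randrange(players)
        hands[p][counts[p]] = card            # step 4: place card at that hand's counter
        counts[p] += 1                        # step 5: bump the hand counter
    return hands                              # step 6: loop ends when the deck runs out
```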

Efficiencies

If the hand is full in step 3, there is nothing stopping it from trying the same hand again when it returns to step 2, but the process is so quick that it will find a free hand in far less time than the shuffle and bubble sort took. I did a bunch of stats to find out how many wasted cycles this would use, but I won't get into that here. The point is that the first hand doesn't fill up, on average, until around the mid-forties. I have a gist that does an equivalent calculation. The average number of attempts to deal to an already-full hand is a bit less than 20 per deal. It would require many more cycles to check whether the hands are full up front than it does to just fail and retry. This does conflict with my desire to find creative and elegant solutions, however.
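The gist itself isn't included here, but a rough simulation along these lines gives the same kind of estimate of the wasted attempts:

```python
import random

def wasted_attempts_per_deal(players=4, hand_size=13):
    """Count how many times one full deal lands on an already-full hand."""
    counts = [0] * players
    wasted = 0
    for _ in range(players * hand_size):
        p = random.randrange(players)
        while counts[p] == hand_size:
            wasted += 1                 # a failed attempt: pick again
            p = random.randrange(players)
        counts[p] += 1
    return wasted

trials = 100_000
average = sum(wasted_attempts_per_deal() for _ in range(trials)) / trials
print(f"average wasted attempts per deal: {average:.1f}")
```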

This method is already faster than the shuffle-and-deal process on its own, and it also means that the final hands don't have to be sorted afterwards. Because the cards were dealt in ascending order and the slots within each hand were also filled in ascending order, the resulting hands end up in ascending order automatically.

I will note that this solution is perhaps only the most efficient in games where the whole deck is dealt. If only a partial deck is dealt, then suddenly the previous method becomes more efficient. No matter what, you need a random selection of the cards to be dealt before distributing them, and running the previous shuffle for only the necessary number of loops does exactly that. From there it is faster to deal to the hands in order and sort each hand after the fact than it would be to sort and then deal. This is true because: A) there are no failed deal attempts, and B) it is faster to sort multiple smaller piles than one large one. This is the principle behind the "merge sort", a much more efficient sorting algorithm.
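A sketch of that partial-deal variant, under the same assumptions as the earlier snippets:

```python
import random

def partial_deal(players=4, hand_size=5):
    """Shuffle only as many cards as needed, deal them in order,
    then sort each small hand."""
    deck = list(range(52))
    needed = players * hand_size
    drawn = []
    limit = 52
    for _ in range(needed):             # the earlier shuffle, stopped early
        i = random.randrange(limit)
        drawn.append(deck[i])
        for j in range(i, limit - 1):
            deck[j] = deck[j + 1]
        limit -= 1
    # Deal round-robin, then sort each small pile after the fact.
    return [sorted(drawn[p::players]) for p in range(players)]
```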

The moral of the story is that people are smarter than computers in a lot of ways, but our intelligence can sometimes get in the way. To avoid cheating in a card game we go through a process that is completely counter-productive every time the cards need to be dealt. Computers don't need to think about what abusing their position could mean. In this case it means that they can be a benevolent dictator. It is also the reason everyone is terrified of a generalized artificial intelligence with poorly defined goals. I'll blog about that when I have a solution...