Entries Tagged 'Math'
April 12th, 2019 — Math, Research, Visualization
Mergesort and Quicksort are two well-known sorting algorithms, familiar to both researchers and practitioners. Both are typically presented as examples of divide and conquer. Other than this fact, they are usually not seen to be very closely related. Let’s look at Mergesort and Quicksort in a somewhat unusual way that will make them appear as two sides of the same coin.
Suppose we want to sort a list of items \(a_1, a_2, \dots, a_n\).
In Mergesort, we split the list into two parts: \(L = a_1, \dots, a_{n/2}\), and \(R = a_{n/2+1}, \dots, a_n\). We recursively sort \(L\) and \(R\), then merge the two sorted lists to obtain the final, sorted output.
How do we sort \(L\) and \(R\) recursively? We split them further into equal parts (let’s assume for simplicity that \(n\) is a power of two, so this always works), until parts consist of single elements, which are already trivially sorted.
So, at the bottom of the recursion tree, we have \(n\) singleton lists, consisting of \(a_1, a_2, \dots, a_n\). Assuming that the splitting has happened “in place”, these items are still stored in their original order, “sorted by their index” \(1,2,\dots,n\).
See the following illustration.
The bottom-most line contains the elements in their original order, one in each “box”. As we come out of the recursion, we merge neighboring “boxes” in pairs, putting their elements in the correct order. So within each box, elements are sorted “by value”.
Observe that neighboring boxes in the same line are sorted “by index”, in the sense that their ordering preserves the original order: indices in any box are smaller than indices in the next box to the right. For example, in the second line from the bottom, the first box will contain \((a_1, a_2)\) or \((a_2, a_1)\), the next box will contain \((a_3, a_4)\) or \((a_4, a_3)\), etc., the point being that 1 and 2 are both smaller than 3 and 4.
This is Mergesort.
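In code, the whole scheme is compact. Here is a minimal Python sketch of the recursion just described (the function name is mine):

```python
def mergesort(a):
    if len(a) <= 1:
        return list(a)                 # a singleton box: trivially sorted
    mid = len(a) // 2
    left, right = mergesort(a[:mid]), mergesort(a[mid:])   # split "by index"
    out, i, j = [], 0, 0
    while i < len(left) and j < len(right):   # merge neighboring boxes:
        if left[i] <= right[j]:               # within the merged box the
            out.append(left[i]); i += 1       # elements become sorted "by value"
        else:
            out.append(right[j]); j += 1
    return out + left[i:] + right[j:]
```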
Can we reverse the arrow of time and go from the top towards the bottom? Well, that would be strange, because at the top we have a single box, so this would mean that we start with a single list that is already sorted, which wouldn’t make sense.
But what if by “sorted” we meant “sorted by index”, meaning that the entries are stored as \((a_1, a_2, \dots, a_n)\)? There is of course no harm in assuming this, as this is indeed how the input is presented. So let’s update the figure.
Instead of bottom-to-top, let’s go top-to-bottom. Let’s also swap “by value” and “by index” throughout.
Now we go from a single box, sorted by index (meaning an unsorted input list) to a list of boxes sorted among them by value (meaning a sorted output list at the bottom). So conceptually, this looks very much like a valid sorting algorithm. What is it? Does it have a name?
Yes, it is Quicksort.
Let’s think it through again: we start with a list (sorted “by index”), and we produce from it two lists, sorted between them “by value”. This means that all entries in one are smaller than all entries in the other (this is essentially the partitioning step of Quicksort: we split the elements into those smaller than a pivot, and those greater than or equal to it).
What does it mean that “within a box” the entries are now sorted “by index”? This just means that on both sides of the partitioning we preserve the original ordering of entries. This is not true in every implementation of Quicksort, but it is a technicality we can certainly enforce if we want to. Observe that we assumed that Quicksort splits the list into two equal parts, something we can achieve if we use the median as pivot (in practice, of course, a random pivot works just fine and is usually faster).
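Here is a minimal Python sketch of exactly this variant, assuming distinct elements: the median pivot gives the even split, and the list comprehensions preserve the original order within each side.

```python
import statistics

def quicksort(a):
    if len(a) <= 1:
        return list(a)
    pivot = statistics.median_low(a)        # median pivot: an even split
    left  = [x for x in a if x <= pivot]    # each side keeps the original
    right = [x for x in a if x >  pivot]    # order of a: sorted "by index"
    return quicksort(left) + quicksort(right)
```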
So there you have it. Quicksort and Mergesort are “duals”: you get one from the other by running time backwards, and swapping “by value” and “by index” in the invariants they maintain (or alternatively, swapping “between” and “within”). Yet another way to look at it is that Quicksort does a similar thing as Mergesort, but on the inverse permutation as input.
Surprising? Obvious? Is this view of practical use?
What about other sorting algorithms?
For the latter question, observe that we can extend this view to accommodate unequal splits, or in an extreme case, just splitting off (or merging) one element from/to a longer list. In this view, we obtain (essentially) InsertionSort, respectively SelectionSort, which also appear as duals of each other.
This latter duality extends further, and holds even if we do insertion, resp. selection by fancy data structures, such as self-adjusting binary search trees and adaptive tournaments as heaps. These ideas are explored in a different direction in an article here.
There are also other dualities on ordered sets and data structures (the word “duality” itself is quite overloaded). Exploring dualities often suggests new concepts by symmetry or analogy.
As for the discussion of this post, here is an illustration of both sides.
Illustration of the sorting duality.
August 17th, 2016 — Math, Puzzle
Consider the following sequence. Let the first element be \(a_0 = 0\).
Then, for \(a_i\), when \(i>0\), let’s do the following. Consider the last element \(a_{i-1}\). Did this element appear previously in the sequence? If not (meaning that \(a_{i-1}\) is its first occurrence), set \(a_i\) to \(0\). If \(a_{i-1}\) did appear before, then let \(a_k\) be its previous occurrence. Set \(a_i\) to the value \(i-1-k\) (meaning, “how far back did we see this value?”).
So, we have 0,0,1,0,2,0,2,2,1,6,…
For example, the last 6 is there because the previous element 1 has appeared 6 positions earlier.
This is known as Van Eck’s sequence and appears as A181391 in the wonderful On-Line Encyclopedia of Integer Sequences. It was added to the OEIS by Jan Ritsema van Eck in 2010. There are many interesting properties of this sequence (for instance, it was shown that it contains infinitely many zeros), and even more intriguing open questions. For more details see the OEIS entry, or these slides by OEIS founder N.J.A. Sloane.
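The sequence is easy to generate; here is a short Python sketch (the function name is mine):

```python
def van_eck(n_terms):
    """First n_terms of Van Eck's sequence (OEIS A181391)."""
    seen = {}                      # value -> position of its latest occurrence
    seq = [0]
    for i in range(1, n_terms):
        prev = seq[-1]             # the element at position i-1
        # 0 if prev is new, otherwise how far back we saw it last
        seq.append(i - 1 - seen[prev] if prev in seen else 0)
        seen[prev] = i - 1
    return seq

print(van_eck(10))                 # [0, 0, 1, 0, 2, 0, 2, 2, 1, 6]
```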
Here is a variation on the above sequence:
Instead of counting the “number of elements”, let’s count only the “number of distinct elements”. That is, again we start with 0, and then if the last element was new, we write 0; otherwise we write how many distinct elements we have seen since its previous appearance.
So the sequence goes: 0,0,1,0,2,0,2,2,1,3,…
For example, the last 3 is there because between the previous element 1 and its last appearance there are 3 distinct elements: 0,1,2.
I thought of this sequence independently, but searching for it on OEIS I found that it has been considered before by Nathaniel Shar, who added it to the OEIS as A268755. See here for a plot.
The rule for defining the sequence is similar to the concept of “working set” in the theory of data structures, which refers exactly to the number of distinct queries seen since the previous occurrence of the last query. Therefore, I think a fitting name for the sequence would be “working set sequence”.
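For concreteness, here is a Python sketch generating the working set sequence (quadratic in the number of terms, but good enough for experiments; the function name is mine):

```python
def working_set_seq(n_terms):
    """First n_terms of the working set sequence (OEIS A268755)."""
    seen = {}                      # value -> position of its latest occurrence
    seq = [0]
    for i in range(1, n_terms):
        prev = seq[-1]             # the element at position i-1
        if prev in seen:
            # distinct values seen since the previous occurrence of prev
            seq.append(len(set(seq[seen[prev] + 1:])))
        else:
            seq.append(0)          # prev was new
        seen[prev] = i - 1
    return seq

print(working_set_seq(10))         # [0, 0, 1, 0, 2, 0, 2, 2, 1, 3]
```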
So what can we say about the working set sequence (a.k.a. Shar’s sequence)? Does it also contain an infinite number of zeros, similarly to Van Eck’s sequence? I generated about half a million elements by computer, and the answer seems to be yes. It could happen, of course, that at some point the sequence reaches a cycle that it can never escape, such as 1,1,1,1,1,… or the less trivial …,1,2,3,3,1,3,2,3,2,2,1,3,3,… (exercise: check that this is really a cycle!)
However, the computer experiment suggests that this does not happen, and eventually every positive integer appears in the sequence.
Can this be proven formally? Yes! Let’s leave this as an exercise (I added a proof sketch to the OEIS entry). As a hint, an easier property to observe is that \(k\) can appear only after \(0,…,k-1\) have already appeared.
Theorem. The working set sequence contains infinitely many zeros.
Zeros seem to appear much less frequently in the working set sequence than in Van Eck’s sequence. But how frequent are they? Empirically, there seems to be a gap of roughly a constant times \(k\) between the \(k\)th and \((k+1)\)st zeros, although nothing like this has been proven. Also, the zeros often come in pairs: if for a long time there have been no new elements, eventually a new element \(k\) comes, followed by \(0\); and then, since \(0\) has also not appeared for a very long time, there is a high chance that \(k+1, 0\) follow right away. This seems to happen quite frequently. Overall, however, nothing seems to be known about the statistics of this sequence, e.g. the asymptotic growth rate of \(a_n\).
The indices of new elements in the working set sequence (or alternatively, the indices of zeros minus one) have been added to OEIS as A275668.
I mentioned above two possible cycles in the sequence (although they don’t actually appear if we start from 0). What is the set of all possible cycles, does it have a nice characterization? How many different cycles are there with \(k\) distinct elements? These questions are open.
Finally, another open question about the working set sequence (as far as I know, the same question is open for Van Eck’s sequence as well):
Problem. Does every pair of nonnegative integers \((a,b)\) eventually appear in the working set sequence as a pair of consecutive elements? (except for \((1,1)\) of course, which cannot appear).
[NOTE added August 2017] As pointed out by Jan Ritsema van Eck in a comment (see below), the Van Eck sequence also excludes pairs of the form \((p,p+1)\). Prove it as a fun exercise; it is not difficult! So let’s modify the conjecture to also exclude \((p,p+1)\) pairs besides \((1,1)\). To be honest, the conjecture doesn’t seem extremely plausible (as Jan points out, there are weaker statements one should prove first), so let’s formulate it as a challenge, to move things forward. Can you find an \((a,b)\) pair, not of the form \((p,p+1)\) and not \((1,1)\), that does not appear in the Van Eck sequence A181391?
This restriction does not apply to the working set sequence: for example, \((4,5)\) appears at positions \((21,22)\). So the similar challenge is: can you find an \((a,b)\) pair other than \((1,1)\) that does not appear in the working set sequence A268755?
August 13th, 2015 — Math
hardy (adjective):
: able to live through difficult conditions (such as a cold winter or a drought)
: strong and able to accept difficult or unpleasant conditions
(Merriam-Webster)
G.H. Hardy was an eminent English mathematician, also known for his popular and influential book “A Mathematician’s Apology”.
In this extended essay he argues, among other points, that mathematics has a deep intrinsic beauty and is worth pursuing for its own sake.
I tried very hard to like this essay, but failed miserably on every occasion. It would be too easy, from our 21st-century perspective, to mock Hardy’s mistake of using number theory as his example of a branch of mathematics with absolutely no practical applications (a property that Hardy, by the way, views as a huge positive). However, I also find the main point about mathematical beauty mentioned above not very convincingly presented, even if, like me, you already agree with him before opening the book.
But what I disliked most were Hardy’s strong opinions such as “there is no scorn more profound, or on the whole more justifiable, than that of the men who make for the men who explain. Exposition, criticism, appreciation, is work for second-rate minds”, and in another paragraph “it is a tiny minority who can do something really well, and the number of men who can do two things well is negligible” and finally the notorious “mathematics is a young man’s game”.
To express my dislike of this essay, I am making the modest proposal of a Hardy prize in mathematics, to be awarded to people who achieve significant mathematical results in ways contrary to the letter and the spirit of Hardy’s book. (We can never know, of course, whether Hardy himself would have considered the results of any significance, although we can guess with good confidence.)
More positively, the fictional Hardy prize recognizes:
- pure mathematical work deeply influenced by practice and applications or by teaching and other expository work
- mathematicians (men or women) who started their mathematical career or achieved their most significant result at a relatively late age
- mathematicians who went about their mathematical career in a less than straightforward way, possibly having strong interests or accomplishments outside mathematics, possibly doing mathematics as a hobby
Ignoring historical examples (some of which Hardy must have known), as a small (and quite random) initial sample from our times I would suggest as first recipients: Persi Diaconis, Marjorie Rice, Preda Mihăilescu, Yitang Zhang, Raymond Smullyan.
This post is of course only half-serious (unless someone takes the initiative in such a direction). Would such a prize make sense? This post benefited from discussions with colleagues, who would probably prefer to stay unnamed. I was also informed by this post.
August 12th, 2015 — Math, Research
Recently, together with my colleagues Parinya Chalermsook, Mayank Goswami, Kurt Mehlhorn, and Thatchaphol Saranurak, we published three papers about binary search trees. All three are available as preprints on arXiv. In this post I briefly and informally describe some results and ideas from these papers.
Binary search trees (BST) are perhaps the simplest data structures that are not entirely trivial, and they form the basis of several more advanced data structures and algorithms. BSTs are part of every computer science curriculum, and there is a rich theory describing their properties. Surprisingly, many aspects of BSTs are still not understood, so they are an active field of research.
Searching a single element in a BST is straightforward, and the cost of the search depends on how far the searched element is from the root of the tree. If we search more than one element, we might want to change the tree after each search, to make future searches more efficient. What is the best way to transform the tree after each search? The splay tree of Sleator and Tarjan from 1983 gives one possible such strategy. Splay trees have many nice properties; for example, they are as good as the best static tree for a given sequence (“static” here means not changed between accesses, and I am sweeping some details under the rug here). Sleator and Tarjan conjectured that, even more strongly, splay trees are as good as any “dynamic” tree, i.e. any sequence of changes in the tree between accesses. This last statement is the famous dynamic optimality conjecture. If true, this would be quite surprising: splay trees just “react” to each access, without knowing what accesses will come next. The conjecture says that knowing the future doesn’t help much: even the best strategy for changing the tree (tailored to a given sequence) cannot beat splay trees (again, some details omitted here). The conjecture is wide open. For more information about such questions I recommend the survey of Iacono.
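To make the “changing the tree” model concrete, here is a Python sketch, not of splay trees themselves, but of the simpler move-to-root heuristic: after a successful search, the accessed node is rotated up, one rotation at a time, until it becomes the root. (Splay trees use more careful double rotations, and move-to-root alone does not have their guarantees, but it shows the flavor of a self-adjusting BST.)

```python
class Node:
    def __init__(self, key):
        self.key, self.left, self.right = key, None, None

def access(root, key):
    """Search key; if found, rotate it all the way up to the root."""
    if root is None or root.key == key:
        return root
    if key < root.key:
        root.left = access(root.left, key)
        child = root.left
        if child is not None and child.key == key:
            root.left, child.right = child.right, root   # right rotation
            return child
    else:
        root.right = access(root.right, key)
        child = root.right
        if child is not None and child.key == key:
            root.right, child.left = child.left, root    # left rotation
            return child
    return root
```

Now, our recent results.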
1. Self-adjusting binary search trees: what makes them tick?
Splay trees are considered quite mysterious: they execute some local transformations that just “happen to work”. We know that splay trees have many nice properties (including the static optimality mentioned before), but the proofs are a bit unintuitive. What is special about splay trees that gives them these properties? Are there other algorithms that have similar properties?
So we look at a general class of algorithms that preserve some properties of splay trees and relax others. Such an algorithm accesses an element in a BST, then transforms the search path into some tree, such that the accessed element becomes the root. We identify some simple conditions on this transformation that are sufficient to guarantee that the algorithm shares some of the nice properties of splay trees. We look at splay trees in a different way that makes it obvious that they fulfill these conditions. We also show that some other known algorithms and heuristics fulfill these conditions, which explains why they are efficient. We also identify some new heuristics that fulfill the conditions but have not been studied before. Finally, we ask: are our conditions necessary for a BST algorithm to be efficient? In a limited sense, we show that they are, but the results in this direction are not conclusive. See the paper for details.
2. Pattern-avoiding access in binary search trees
I mentioned that splay tree is conjectured to be as good as any other BST algorithm, even those that can see into the future, for any possible sequence. However, for most “random” sequences, even the theoretically best BST algorithms are quite inefficient, making the question vacuous in that case. The interesting cases are those sequences that have “some useful structure” that BST algorithms can exploit. So, if we want to show that splay trees (or some other algorithm) are efficient, we need to show that they do well on such “structured” input.
The literature on dynamic optimality describes a number of such “useful structures”. One example is “dynamic finger”: loosely speaking, if successive searches are for values that are close to each other, then search should be efficient. A special case of this is “sequential access”: if we just search the values in increasing order from 1 to n, then search should be really efficient.
In this paper we describe a new kind of structure that has been studied in depth in mathematics, but not in the context of BSTs. We show that search sequences should be executable efficiently by a BST algorithm if they are free of some fixed pattern. See this page for a description of patterns. This generalizes the “sequential access” mentioned before in a different direction, and it also includes other known structures as special cases.
We explore this topic in detail in the paper, giving some general results, and some stronger results for special cases.
3. Greedy is an almost optimal deque
Again studying input with some structure, we look here at a sequence of insert and delete operations on a BST, with the restriction that we can only delete or insert “at the two ends”, that is, at values that are the current maximum or minimum of the values in the tree. Such sequences are called deque sequences. It was conjectured a long time ago by Tarjan that splay trees can execute deque sequences very efficiently (in linear time overall). This is not yet known, but the known results are so close to this that the difference is only theoretically interesting (which makes it even more interesting :). In this paper (and the previous one) we look at a different algorithm, introduced independently by Lucas and Munro a few years after splay trees, and later extended by Demaine, Harmon, Iacono, Kane, and Pătraşcu. We show that this algorithm (called Greedy BST) is almost linear on deque sequences. This “almost” again hides a small factor that is only of theoretical interest, although a little bit larger than the corresponding factor known for splay trees. The details are in the paper.
December 6th, 2013 — Math, Puzzle
I first read the following wonderful little puzzle on Mathoverflow, where it was posted by David Futer. I don’t know who invented it or where it appeared first – if you have any information, please let me know. (Note: Toshihiro Shimizu points out the following link: AoPS)
There are 99 bags, each containing some number of oranges and some number of apples (any of these numbers can be zero). Show that we can always select 50 bags out of the 99, such that these 50 bags together will contain at least half of all the apples and at least half of all the oranges.
I like this puzzle because there are many different ways to solve it. Before reading on, do yourself a favor and try solving it yourself – it is not difficult, but very rewarding. If you find a proof that is different from those below, let me know and I’ll add it to the list.
——
Why look for different solutions, once we already have one? Partly because it’s fun, partly because each solution might give some new insight into the problem, partly because we can showcase different mathematical techniques, and perhaps most importantly, because each new proof lends itself to generalization slightly differently – so a new proof might lead to new results. The proofs below are sketchy in places, in order to keep them short – it can be instructive to try to fill in the gaps, or to try coming up with the proof after reading just the name of the technique.
Sometimes proving a more general statement is easier (at least psychologically – by removing distracting elements). In our case, it will be more “fruitful” to prove the following statement:
Given \(n\) bags that contain oranges and apples, we can select \( \lceil \frac{n+1}{2} \rceil \) bags such that they contain at least half of each type of fruit. (we only really care about the case when \(n\) is odd)
Here are all the proofs that I am aware of.
Proof 1. (sorting) (source: I vaguely recall seeing this proof on a math competition website but I can’t find it anymore; see the AoPS link above.)
Sort the \(n\) bags in decreasing order of the number of apples (break ties arbitrarily). Select the first bag, and from each following consecutive pair select the one with more oranges (break ties arbitrarily).
It is easy to see that the selected bags satisfy the required properties.
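This proof is effectively an algorithm. A Python sketch, with a randomized sanity check (names are mine):

```python
import random

def select_bags(bags):
    """Proof 1 as an algorithm. bags: list of (apples, oranges), len odd.
    Sort by apples; take the first bag, then from each consecutive pair
    take the bag with more oranges."""
    order = sorted(range(len(bags)), key=lambda i: bags[i][0], reverse=True)
    chosen = [order[0]]
    for j in range(1, len(order) - 1, 2):
        chosen.append(max(order[j:j + 2], key=lambda i: bags[i][1]))
    return chosen

bags = [(random.randint(0, 99), random.randint(0, 99)) for _ in range(99)]
sel = select_bags(bags)                      # 50 bags out of 99
assert 2 * sum(bags[i][0] for i in sel) >= sum(a for a, _ in bags)
assert 2 * sum(bags[i][1] for i in sel) >= sum(o for _, o in bags)
```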
Proof 2. (induction) (source: idea from David Futer).
First we show that for \( n=3 \) the statement is true. Indeed, we can select the bag with the most apples and the one with the most oranges (if the two are the same, we also pick another bag).
Assume that the statement is true for \(n-2\). We then prove it for \(n\):
Set aside two bags, x and y. Consider a “good split” of the remaining \(n-2\) bags into A and B, such that A has at least half of both fruits in A+B (such a split exists by the induction assumption). Now we try adding back the other two bags, x and y.
If A has as many of both fruits as B+x or as B+y, we are done (we add the other bag to A).
Otherwise if B+x or B+y have as many of both fruits as A, we are done (we add both bags to B).
Otherwise, if A has fewer apples than B+x and fewer apples than B+y, we are done (add to A the one of x and y with more apples). A symmetric argument works for oranges.
Otherwise, if A has fewer apples than B+x and fewer oranges than B+y, it means that A has fewer of both fruits than B+x+y, a case we treated already. A symmetric argument applies after swapping the terms “apples” and “oranges”. There are no other cases.
Proof 3. (contradiction or “canonical counter-example”)
Suppose there is no solution. Consider a set of \( \frac{n+1}{2} \) bags which has at least half of the apples (there is always such a set), and among such sets take one with the most oranges possible.
Let this set be A, and the remaining set B (A has at least half of apples and B has more than half of oranges).
Take the bag from A with the least oranges and move it to B. Now take the bag from B with the most oranges and move it to A. If we just moved the same bag back, it means that all bags originally in B have at most as many oranges as the bag with the least oranges in A. This contradicts the fact that B has more than half of all oranges.
Otherwise there are two cases: 1) the number of apples in A drops below half when we move a bag, which means that B becomes a solution, or 2) the number of apples in A remains at least half throughout, then after the swap we increased the number of oranges in A, which contradicts the statement that the chosen set had the highest such value.
Proof 4. (counting) (source: I read this proof written in probabilistic form by Erez Druk, after being asked by Dror Ozeri on a puzzle forum – as it is a closed forum, I cannot link to it)
I write this and the next proof for the n=99 case only, to avoid excessive notation. The number of 50-sets of bags is the same as the number of 49-sets, \({99 \choose 49}\). Denote the number of 50-sets that contain at least half of all oranges by \(N_{50}\), and the number of 49-sets that contain at least half of all oranges by \(N_{49}\). We have \(N_{50}>N_{49}\), unless none of the bags contain any oranges, which is an easy case anyway (there is a one-to-one mapping from 50-sets to 49-sets, such that each 50-set contains its mapped 49-set). Clearly, more than half of all 50-sets contain at least half of all oranges (otherwise the previous inequality would be violated: if a 50-set does not contain at least half of all oranges, then its complementary 49-set does).
The same argument holds for apples. As more than half of the 50-sets contain at least half of the oranges, and more than half of the 50-sets contain at least half of the apples, there is at least one 50-set containing at least half of both fruits.
Proof 5. (price function) (source: from David Futer)
Assign price \(x\) to an orange, and \(1-x\) to an apple. Select the 50 most expensive bags.
Denote:
\(f(x) = (\mbox{number of oranges in top 50}) / (\mbox{total number of oranges})\)
\(g(x) = (\mbox{number of apples in top 50}) / (\mbox{total number of apples})\)
We have \(g(0) \geq 1/2, f(1) \geq 1/2\), and \(f\) is increasing, \(g\) is decreasing. Clearly there is some \(x\) for which both \(f\) and \(g\) are at least 1/2, which denotes a solution (if for some \(x\) both functions would have values below 1/2, then the top 50 would not be the top 50, as its complement would have more apples and more oranges, therefore higher total price).
Proof 6. (geometry – Ham-Sandwich) (source: from David Futer, and a variant explained by Dror Ozeri on the same forum)
We represent the bags as sets in the plane, with the restriction that a straight line can cross at most two of them (this can always be achieved). We somehow encode the number of oranges and apples in each bag. There are several possible variants here: map each fruit to a red or blue point (depending on its type); or map all oranges of a bag to a single point with a weight that tells the number of oranges, and similarly for all apples of a bag; or map the whole bag to a single point for which we define an “apple-measure” and an “orange-measure”. Now we apply the Ham Sandwich theorem to obtain a balanced cut (there are different variants of this theorem; we choose the one suitable for our representation). If any “bags” are cut by the obtained line, we move them to the side which has fewer bags. This gives a valid solution.
Exercises:
- Prove the following more general statement. Which proof technique is suitable?
Given \( n\) bags that contain \( f\) kinds of fruits, we can select \( \lceil \frac{n+f-1}{2} \rceil \) bags that contain at least half of each kind of fruit.
- Prove the following variant. Which proof technique is suitable?
Given 100 bags that contain apples and oranges, we can select 34 bags that contain at least one third of the apples and one third of the oranges.
- Find a general statement with \(n\) bags, \(f\) kinds of fruits and finding a subset of the bags that contains at least a \(p\)-fraction of the fruits of each kind, for arbitrary \( 0 \le p \le 1\).
- Assume that we know the contents of each bag. Which proof leads to an efficient algorithm for finding a good subset of bags in the sense described above? In particular, which proof leads to a linear-time \(O(n)\) algorithm?
- Can you come up with new proofs for the puzzle? If so, I would love to hear them. Do any of the proofs admit new generalizations?
Thanks to David Futer, Lavinia Dinu, Manuel Arora, and Tobias Mömke for interesting discussions about this puzzle.
February 21st, 2013 — Math, Puzzle
Here’s a small puzzle/game that I made, using a simple geometric concept. You can try it here: disk intersection game. The game idea is loosely inspired by the game planarity, but it is also significantly different.
There are some disks, and you can move them around arbitrarily (drag with the mouse). The goal is to make all of them “good”. When a disk is “good”, its color turns to green. The other colors indicate different levels of “badness”, as shown in the legend to the right. With some experimentation you can probably discover what you need to do to achieve this, but here is an explanation of the idea:
There is a hidden “model”, created randomly, which you need to discover. The model determines how the disks need to be interconnected, meaning which pairs of disks have to intersect each other (overlap with each other). If you “realize” this model, by moving the disks into a position where they intersect each other in the required way, you win the game. The colors of the disks indicate how far the current configuration is from the correct model. This is computed as follows: if a certain disk should intersect disk A and no other disks, but instead it intersects disks B and C, and it doesn’t intersect A, then its “badness” is 3, so it has the color corresponding to the number 3. In other words, “badness” shows how many errors there are due to a given disk. When the “badness” of all disks is 0, you have solved the puzzle.
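In other words, the badness of a disk is the size of the symmetric difference between its required and actual neighbor sets. A Python sketch of this computation, as I would reconstruct it from the example above (the game itself is written for HTML5 Canvas, and all names here are mine):

```python
def intersects(d1, d2):
    """Do two disks overlap? A disk is a triple (x, y, r)."""
    (x1, y1, r1), (x2, y2, r2) = d1, d2
    return (x1 - x2) ** 2 + (y1 - y2) ** 2 <= (r1 + r2) ** 2

def badness(name, disks, required):
    """disks: dict name -> (x, y, r); required[name]: the set of disks
    that `name` should intersect according to the hidden model."""
    actual = {other for other in disks
              if other != name and intersects(disks[name], disks[other])}
    return len(required[name] ^ actual)  # missing + superfluous intersections
```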
If you find the color codes hard to recognize, you can click the “#” button, and the “badness” will appear as a number. You can also ask for help three times during the game. Clicking the help button will display the model that has to be realized (i.e. the connections between the disks, shown as lines). There are easy and hard games, in a hard game the model tends to be more complicated, therefore harder to discover.
Try the disk intersection game.
Remarks:
- I came up with the idea for the game, while working on a problem related to unit disk graphs. This is the first iteration of the game, so probably the gameplay and the design could be significantly improved. If you have any suggestions, please let me know in the comments.
- There is nothing special about disks here, maybe other intersection graphs would make more interesting (or more challenging) games.
- One reason I wrote the game, besides wanting to try out the idea, was to get familiar with HTML5 Canvas – in the future I’d like to make another, somewhat more complex game. As this was my first encounter with Canvas, and because I wrote the game in one sitting (~3 hours), the implementation is a bit rough around the edges. Please let me know if you find any bugs or if you have suggestions.
October 25th, 2012 — Math, Puzzle
Some years ago I was reading about the game of Nim, then a bit more about the theory of impartial games, and as the rabbit hole seemed to go deeper and deeper, I needed some pretext to continue. So I came up with the rules of a simple paper-and-pencil game that I could try to analyze with the theory I was reading about. I never managed to fully analyze the game, but it seemed to be reasonably fun to play anyway. Later I found that some very similar games had already been described.
The rules are simple: two players take turns marking squares on a grid (say, of size 8×8). You can only mark a square if all of its neighbors (including the diagonal neighbors) are empty. The first player unable to move loses.
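The rules translate to a few lines of Python (a sketch; the board is simply a set of marked cells):

```python
def legal_moves(marked, n=8):
    """marked: set of (row, col) cells already taken on an n-by-n grid."""
    def legal(r, c):
        # the square itself (dr = dc = 0) and all 8 neighbors must be empty
        return all((r + dr, c + dc) not in marked
                   for dr in (-1, 0, 1) for dc in (-1, 0, 1))
    return [(r, c) for r in range(n) for c in range(n) if legal(r, c)]

# The player to move loses exactly when legal_moves(marked) is empty.
```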
More details of the game are here.
Recently, I had the pleasure of discovering a website that contains the descriptions of many paper and pencil games, most of them playable on the website against the computer. The site is maintained by David Johnson-Davies. In fact, I found out about the website from him, when he told me that he also included the game I described above. The game is filed under the name “Obstruction game” which very well describes what it is about. You can even play it against the computer. Here is the website with all the games and here is the link to the Obstruction game. There is even a description with some useful strategy-tips.
Happy playing!
June 6th, 2012 — Math, Puzzle
Suppose there are n sports teams, some of which play each other; in each game one team wins and the other loses, and in the end we want to order the teams such that if A beat B, then A comes before B in the order. If two teams A and B played each other multiple times but, say, A clearly dominated B, we treat it as if they had played once and A beat B. If they played each other several times and there is no clear winner, or if they played only one draw, we treat it as if they had played twice, with A winning once and B winning once.
We can represent the whole situation as a directed graph with n vertices {1, …, n} and an edge from i to j if i won against j. Clearly, there are no self-loops. If the graph has no cycles, we can efficiently compute a topological sorting of the vertices and we are done. What if the graph has cycles (two teams that played a draw, or a sequence of teams where A beat B, B beat C, …, X beat A), but we still want to output a reasonable ordering? What is a reasonable ordering?
This is the vertex ordering problem, and we can formulate many different criteria. I’ll go through some variants and mention for each of them whether it is NP-hard or polynomially solvable. It turns out that almost all of them are NP-hard, and those which are not are quite trivial (plus the topological sorting mentioned before, which is also polynomially – in fact, linearly – computable). I’ll try to refer to sources where I found some; where I didn’t find any, the result is simple enough to be considered folklore. Not all the criteria that I list are reasonable – a reasonable criterion should actually lead to the topological sorting when there are no cycles. Did I forget some sensible criteria? Let me know what you think.
The general set-up is the following: we search for a permutation \(f : [n] \to [n]\), such as to minimize or maximize some quantity, as follows:
1. The first criterion is that we want to minimize/maximize some reasonable function of the number of wins and the number of losses of a team. This includes the methods used in most sport tournaments, where wins and losses are worth some number of points and in the end the teams are ranked by their accumulated number of points. This criterion is easy, because we can just compute the indegree/outdegree of each vertex, compute the necessary function values, and sort the vertices accordingly.
Now we move to some more interesting criteria.
- We either minimize (MIN) or maximize (MAX) some quantity.
- The quantity is either the SUM, the MAXimum, the MINimum or the COUNT of something.
- That something is either the length of EDGEs, the length of BACK-EDGEs, or SIGNED length of EDGEs.
This gives 2*4*3=24 cases.
For example, if \(i \to j\) denotes an edge from \(i\) to \(j\), and we define the length of an edge as \(|f(i) - f(j)|\), call it a back-edge when \(f(i) > f(j)\), and define its signed length as \(f(i) - f(j)\) (positive exactly for back-edges), then:
MIN COUNT EDGE refers to \(\min_f \, \#\{(i,j) : i \to j\}\),
MAX SUM EDGE refers to \(\max_f \sum_{i \to j} |f(i) - f(j)|\),
MIN SUM BACK-EDGE refers to \(\min_f \sum_{i \to j,\, f(i) > f(j)} \left(f(i) - f(j)\right)\),
MAX MIN SIGNED-EDGE refers to \(\max_f \min_{i \to j} \left(f(i) - f(j)\right)\).
These are all for directed graphs, as mentioned earlier. Before starting to enumerate all these cases, let’s define the same problem for undirected graphs. For undirected graphs there are fewer cases, as edges, back-edges, and signed edges need not be distinguished. Thus there are just 2*4=8 possibilities here.
Now let’s see what we’ve got, numbering continuously all the cases.
Undirected graphs
2. MIN SUM: This is called Minimum Linear Arrangement, a known NP-hard problem.
3. MIN MAX: This is called Minimum Bandwidth, a known NP-hard problem.
4. MIN MIN: makes no sense, it can always be 1.
5. MIN COUNT: makes no sense, it is always the number of edges.
6. MAX SUM: this is equivalent to 2. on the complement graph, therefore NP-hard.
7. MAX MAX: makes no sense, it can always be n-1.
8. MAX MIN: This is the anti-bandwidth problem, studied in the literature, see here, or here (leads to paywall). It is NP-hard. This can be seen via the following reduction: “Does G have a Hamiltonian path?” is the same as “can the vertices of G be arranged such that neighbors in the arrangement are connected by edges?”, which is the same as “can the vertices of G be arranged such that the complement of G has anti-bandwidth at least 2?”. If we could answer the last question efficiently, we could answer the first.
9. MAX COUNT: makes no sense, it is always the number of edges.
Now we move on to the original question:
Directed graphs
10. MIN SUM EDGE: NP-hard. From 2. if we replace every undirected edge with a directed edge.
11. MIN SUM BACK-EDGE: NP-hard. From 2. if we replace every undirected edge with two directed edges.
12. MIN SUM SIGNED-EDGE: this is quite interesting. Notice that we can replace a path a->b->c with a single edge a->c without changing the sum. If we do this sufficiently many times, we will be left with vertices having only incoming or only outgoing edges. To minimize the sum, we clearly have to put the vertices with outgoing edges first, then the vertices with incoming edges; otherwise we could make a simple replacement that would decrease the sum. Furthermore, the vertices with outgoing edges have to be ordered in decreasing order of degree, and the vertices with incoming edges in increasing order of degree; again, otherwise a simple swap would decrease the sum. That’s all. Observe that what we’ve got is the same as ordering the original vertices in decreasing order of [outdegree-indegree] (see the code sketch after this list). In this sense, this is a special case of 1.
13. MAX SUM EDGE: NP-hard. From 6. if we replace every edge with a directed edge.
14. MAX SUM BACK-EDGE: NP-hard. From 6. if we replace every edge with two directed edges.
15. MAX SUM SIGNED-EDGE: solution is the reverse order of 12.
16. MIN MAX EDGE: NP-hard. From 3. if we replace every undirected edge with a directed edge.
17. MIN MAX BACK-EDGE: NP-hard. From 3. if we replace every undirected edge with two directed edges.
18. MIN MAX SIGNED-EDGE: NP-hard. From 3. if we replace every undirected edge with two directed edges. Known as Minimum Directed Bandwidth.
19. MAX MAX EDGE: makes no sense, it can always be n-1.
20. MAX MAX BACK-EDGE: makes no sense, it can always be n-1.
21. MAX MAX SIGNED-EDGE: makes no sense, it can always be n-1.
22. MIN MIN EDGE: makes no sense, it can always be 1.
23. MIN MIN BACK-EDGE: makes no sense, it can always be 1 and has to be at least 1 if there are cycles.
24. MIN MIN SIGNED-EDGE: makes no sense, it can always be -(n-1).
25. MAX MIN EDGE: NP-hard. From 8. if we replace every undirected edge with a directed edge.
26. MAX MIN BACK-EDGE: NP-hard. From 8. if we replace every undirected edge with two directed edges.
27. MAX MIN SIGNED-EDGE: NP-hard. From 8. if we replace every undirected edge with two directed edges.
28. MIN COUNT EDGE: makes no sense, it is always the number of edges.
29. MIN COUNT BACK-EDGE: NP-hard, known as Minimum Feedback Arc Set.
30. MIN COUNT SIGNED-EDGE: makes no sense, it is always the number of edges.
31. MAX COUNT EDGE: makes no sense, it is always the number of edges.
32. MAX COUNT BACK-EDGE: NP-hard, it is the reverse order of 29.
33. MAX COUNT SIGNED-EDGE: makes no sense, it is always the number of edges.
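As promised in case 12, here is a small Python sketch of the ordering it prescribes (the function name is mine):

```python
def rank(n, edges):
    """Case 12 (MIN SUM SIGNED-EDGE): order vertices 0..n-1 in
    decreasing order of outdegree - indegree. edges: (winner, loser)."""
    score = [0] * n
    for i, j in edges:
        score[i] += 1              # outgoing edge: a win
        score[j] -= 1              # incoming edge: a loss
    return sorted(range(n), key=lambda v: score[v], reverse=True)

print(rank(4, [(0, 1), (1, 2), (2, 1), (3, 0)]))   # [3, 0, 2, 1]
```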
Phew :) Did I forget any interesting criteria?
From the point of view of the original motivation (a ranking of teams) the criteria that seem to some extent reasonable are: 11, 12, 17, 18, 29.
May 1st, 2012 — Math, Puzzle
Let’s revisit the topic of the previous post on self-counting sentences. We looked at sentences like this:
In this sentence the number of occurrences of 1 is __, of 2 is __, ..., of n is __.
Let’s go up one more step on the ladder of abstraction, to arrive at sentences of the type:
In this sentence, there are 3 numbers appearing 1 time, 1 number appearing 2 times, 1 number appearing 3 times, 0 numbers appearing 4 times.
In this sentence, there are 2 numbers appearing 1 time, 3 numbers appearing 2 times, 0 numbers appearing 3 times, 0 numbers appearing 4 times.
Observe that both sentences say the truth about themselves. Below are three puzzles. In all cases the task is to fill in the gaps so as to make the sentence say the truth, and the questions are: for what values of n does a solution exist, and what are the solutions (if they exist)?
In this sentence there are __ numbers occurring 1 time, __ numbers occurring 2 times, ..., __ numbers occurring n times.
In this sentence, from the numbers 0,1, ..., n-1, there are __ numbers occurring underlined 0 times, __ numbers occurring underlined 1 time, ..., __ numbers occurring underlined n-1 times.
In this sentence there are __ numbers occurring underlined 1 time, __ numbers occurring underlined 2 times, ..., __ numbers occurring underlined n times.
Note: I submitted the first variant of the problem to KöMaL, where it appeared in the 2013 September issue (Hungarian, English – unfortunately, as of this writing, the English version appears mistaken there). Later, Toshihiro Shimizu pointed out that a formulation essentially equivalent to the second variant above appeared earlier as an IMO problem (Combinatorics 5).
.
.
.
.
.
.
The second problem is equivalent to the first problem. There exists a solution to problem 1, if and only if the same sequence of numbers is a solution to problem 2 (note that in problem 2 the counting starts from 0). Therefore we only look at the first and the third problems.
Solution of problem 1:
For small values of n, we can find the solutions by trial and error or with a small program. We denote the numbers of the solution as f(1), f(2), …, f(n).
For n=1, there is a single solution: f(1)=2.
For n=2, there is a single solution: f(1)=4, f(2)=0.
For n=3, there is a single solution: f(1)=4, f(2)=1, f(3)=0.
For n=4, there are two solutions (see the beginning of the post).
For n=5, n=6, there are no solutions.
For n=7, there are two solutions: f(1)=4, f(2)=3, f(3)=0, f(4)=1, f(5)=0, f(6)=0, f(7)=0, and f(1)=5, f(2)=1, f(3)=1, f(4)=1, f(5)=0, f(6)=0, f(7)=0.
For n=8, there are two solutions: f(1)=5, f(2)=2, f(3)=1, f(4)=1, f(5)=0, f(6)=0, f(7)=0, f(8)=0, and f(1)=5, f(2)=3, f(3)=0, f(4)=0, f(5)=1, f(6)=0, f(7)=0, f(8)=0.
For every n>=9 there are exactly three solutions:
a) f(1)=n-3, f(2)=3, f(n-3)=1, and the remaining values 0,
b) f(1)=n-2, f(2)=1, f(4)=1, f(n-4)=1, and the remaining values 0,
c) f(1)=n-3, f(2)=2, f(3)=1, f(n-4)=1, and the remaining values 0.
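The small cases above can be reproduced by a short brute-force search; a Python sketch (it prunes the search space using Property 1, proven right below):

```python
from collections import Counter
from itertools import product

def true_sentence(f):
    """Is 'there are f[i-1] numbers occurring i times (i=1..n)' true of itself?"""
    n = len(f)
    counts = Counter(list(range(1, n + 1)) + list(f))  # all 2n numbers
    occ = Counter(counts.values())   # multiplicity -> number of values with it
    return all(f[i - 1] == occ.get(i, 0) for i in range(1, n + 1))

for n in range(1, 6):
    sols = [f for f in product(range(2 * n + 1), repeat=n)
            if sum((i + 1) * v for i, v in enumerate(f)) == 2 * n  # Property 1
            and true_sentence(f)]
    print(n, sols)
```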
These three solutions clearly work. Let’s prove that no other solutions exist.
We can observe two properties of the sequence f(i):
Property 1: \(\sum_{i=1}^{n} i \cdot f(i) = 2n\).
Proof:
There are 2n numbers in the sentence, and the term \(i \cdot f(i)\) counts how many numbers there are with multiplicity i. A multiplicity greater than n would mean that all gaps are filled with the same value, which cannot yield a true sentence for any value. Thus the possible multiplicities are between 1 and n, and therefore \(\sum i \cdot f(i)\) gives the total number of numbers in the sentence, which is 2n.
Property 2: \(\sum_{i=1}^{n} f(i) = n+1\).
Proof:
Clearly, for all i, f(i)>=0 and f(i)<=n.
If f(i)>0 held for all i, then \(\sum i \cdot f(i)\) would be larger than 2n, violating Property 1. Therefore, some of the f(i) values have to be equal to 0, which means that all numbers 0,1,…,n appear in the sentence. Furthermore, all numbers that appear in the sentence have multiplicity between 1 and n. Thus, \(\sum f(i)\) gives the total number of distinct values, which is n+1.
Now we prove that a) b) c) are the only solutions for n>=9.
If f(1)>n-2,
then f(1) is n-1 or n (recall that f(i)<=n). If f(1)=n-1, then n-1 occurs twice, so to maintain that n-1 values occur once, we would need to write n-1 in every other position as well, which is not a correct solution. If f(1)=n, then by Property 2 the remaining gaps contain a single 1 and otherwise zeros, and one can check that then only n-2 values occur exactly once, not n. So this case is impossible.
If f(1)=n-2,
then the remaining gaps must sum to 3 (Property 2). We cannot write n-2 anywhere else, as that would violate Property 2, so the possibilities are writing a single 3, a 2 and a 1, or three 1’s. Writing a single 3 anywhere doesn’t make the sentence correct. Writing a 2 and a 1 makes 0, 1, 2 and n-2 the values with multiplicity at least two, so only n-3 values would occur once, not n-2. It remains to write three 1’s. But then the number of 0’s will be n-4, the number of 1’s will be 4 and the number of (n-2)’s will be 2. This gives us solution b).
If f(1)=n-3,
we have two other nonzero values (besides n-3) written in the gaps. They also have to add up to 4 (Property 2). That can be as 3+1 or as 2+1+1. The first case means that 1,3 and n-3 have multiplicities 2 and 0 has multiplicity n-3. The second case means that 2 and n-3 have multiplicity 2, 0 has multiplicity n-4 and 1 has multiplicity 3. These cases uniquely lead to solutions a) and c).
If f(1)<n-3,
suppose \(f(1)=n-k\), where \(k>3\). The value 0 has to appear more than once (otherwise the sum in Property 1 would be too large). Therefore, among the values 1, …, n, there are k distinct values which are written somewhere in the gaps (to make their multiplicities larger than one in the whole sentence). We know that n-k appears in the first gap, so we have k-1 other values in the gaps starting from the second. Let’s denote these values by \(a_1 < a_2 < \dots < a_{k-1}\). Since they are written in the gaps after the first one, it follows that there are at least \(a_1 + a_2 + \dots + a_{k-1}\) distinct numbers that appear more than once in the sentence. Even if one of them is n-k, and one of them is 0, that still leaves \(S = a_1 + a_2 + \dots + a_{k-1} - 2\) distinct numbers appearing more than once (and therefore also appearing in the gaps). Since the \(a_i\) are distinct positive integers, \(S \ge k(k-1)/2 - 2\). Even if the smallest of these numbers is 1, we get that \(\sum f(i) > n+1\) for \(k>3\). This violates Property 2, therefore no solution of this kind is possible.
Solution of problem 3:
For n=1, we have f(1)=1, unique solution.
For n=2, we have f(1)=2, f(2)=0, unique solution.
For n=3, we have f(1)=1, f(2)=1, f(3)=0, unique solution.
For n=4, we have f(1)=2, f(2)=1, f(3)=0, f(4)=0, unique solution.
For n>=5, we have exactly two solutions:
a) f(2)=1, f(n-2)=1, the remaining values 0
b) f(1)=2, f(n-2)=1, the remaining values 0.
Proof:
Similarly to the first problem, now we have the property:
\(\sum_{i=1}^{n} i \cdot f(i) = n\).
Clearly f(n)=0, otherwise all values would have to be equal, which is not a correct solution.
Also, f(n-1)=0. The only alternative would be f(n-1)=1, and all other values equal to zero, but this is not a correct solution either.
If f(n-2)=1, we need \(\sum i \cdot f(i)\) over all other positions to be 2. This leads to solutions a) and b) only.
f(n-2)>=2 is not possible, as it would violate the property for n>=5.
Now suppose f(n-k) is the last nonzero value, for k>=3. This means that there are k numbers that are not equal to one of the most frequent values. If 0 is one of the most frequent values, then there are k nonzero values. They cannot all be equal to 1, as that would not give a correct solution, so \(\sum i \cdot f(i) \ge k(k-1)/2 + (n-k) + 1\), which is larger than n for k>=3, violating the required property. On the other hand, if fewer than half of the elements are 0, the property is clearly violated, which shows that there are no other solutions.
April 30th, 2012 — Math, Puzzle
In this sentence the number of occurrences of 1 is 2, of 2 is 3, of 3 is 2, of 4 is 1.
This self-referential, true sentence is a variant of a puzzle attributed to the logician Raphael Robinson, popularized in Douglas Hofstadter’s book Metamagical Themas. In the book the puzzle appears in the following form:
Fill in the gaps such as to make the sentence true:
In this sentence the number of occurrences of the digit 0 is __, of digit 1 is __, of digit 2 is __, ..., of digit 9 is __.
With some trial and error you can find two different solutions, and it has been shown that no other solutions are possible. The problem has also been studied using some theory from the field of dynamical systems. Here are two interesting links, both containing the solution of the puzzle.
In this post, however, I would like to look at a slightly different puzzle. Instead of counting digits, I am counting the numbers as indivisible units. If, for instance, “15” appears in the sentence, I count it as “15”, not as one “1” and one “5”. In this way the problem is independent of the base in which the numbers are represented.
In both of the following two puzzles the task is to fill in the gaps such as to make the sentences true. In the first sentence the values can be between 1 and n (including 1 and n), in the second sentence the values can be between 0 and n-1 (both included). For what values of n are the problems solvable? How many solutions are there?
In this sentence the number of occurrences of 1 is __, of 2 is __, ..., of n is __.
In this sentence the number of underlined occurrences of 0 is __, of 1 is __, of 2 is __, ..., of n-1 is __.
Think about the puzzles for some time, or scroll down for the solution. Let me know if you find a simpler solution. I also wrote a follow-up post, with a different variant of the problem.
.
.
.
.
.
.
It can be shown that the two problems are equivalent: let’s denote the numbers written in the gaps of the first sentence by f(1), …, f(n). It can be seen easily that f(1), …, f(n) form a solution for the first problem if and only if f(1)-1, …, f(n)-1 form a solution for the second problem. So it’s enough to look at the first puzzle only.
With some case-based analysis we can show that there is no solution for the cases n=1, n=2, n=3 and n=6.
For n=4, there are two solutions, the one given in the beginning of the post and the following: f(1)=3, f(2)=1, f(3)=3, f(4)=1.
For n=5, the solution is given by the sequence: f(1)=3, f(2)=2, f(3)=3, f(4)=1, f(5)=1 and it is unique. These can be verified manually.
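They can also be found by a short brute-force search; a Python sketch (using the fact that, since all 2n numbers in the sentence lie between 1 and n, the filled-in counts sum to 2n):

```python
from collections import Counter
from itertools import product

def true_sentence(f):
    """Is 'the number of occurrences of i is f[i-1] (i=1..n)' true of itself?"""
    n = len(f)
    counts = Counter(list(range(1, n + 1)) + list(f))  # all 2n numbers
    return all(f[i - 1] == counts[i] for i in range(1, n + 1))

for n in range(1, 8):
    sols = [f for f in product(range(1, n + 1), repeat=n)
            if sum(f) == 2 * n and true_sentence(f)]   # 2n numbers in total
    print(n, sols)
```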
For every n>=7, there is a solution, given by f(1)=n-3, f(2)=3, f(3)=2, f(n-3)=2, and f(k)=1, for every k other than 1,2,3, and n-3. Let’s prove that this is the only solution:
Suppose there is some other solution.
If f(1)=n-3, then f(n-3)>=2.
—> If f(n-3)=2, then we need f(2)>=3.
——> If f(2)=3, then we get the same solution.
——> If f(2)>3, then there exist distinct x and y (also different from n-3, 1 and 2), for which f(x)=f(y)=2, so we have a value different from 1 at places 1,2,x,y,n-3, which means that we don’t have enough places to put 1’s, to satisfy f(1)=n-3.
—> If f(n-3)>2, then we have some x (different from 1 and n-3), for which f(x)=n-3. But then there are n-4 places where we need to put x, again not leaving enough places to satisfy f(1)=n-3.
If f(1)>n-3, then f(f(1))>=2.
—> If f(f(1))=2, then we need f(2)>=3 and f(f(2))>=2. We ran out of places for 1’s, since 1,2,f(2) and f(1) are distinct and they map to values different from 1.
—> If f(f(1))>2, then there is some x (other than 1) with f(x)=f(1)>=n-2, which means that we need to fill at least n-3 places with the value x, not leaving enough place for the 1’s.
The interesting case remains when f(1)<n-3. This condition means that in at least 5 places we need to write a number different from 1.
Suppose there are exactly k places, corresponding to the indices \(x_1, \dots, x_k\), where we have an entry different from 1. This means that \(f(x_1), \dots, f(x_k)\) are different from 1 and \(k \ge 5\). Let \(x_1 = 1\), since we know that in place 1 we have an entry different from 1.
Observe that among \(f(x_1), \dots, f(x_k)\) there have to be exactly k-1 distinct elements. This is because we need a value different from 1 in all places \(1, f(x_1), \dots, f(x_k)\), and unless there are exactly k such places, we reach a contradiction. Since we have k-1 distinct elements, all different from 1, the maximum among them is at least k and the second largest element is at least k-1. This means that there is a number t different from 1 such that \(f(t) \ge k-1\); therefore t appears as a count in at least k-2 places. If out of k counts we have k-2 equal, we cannot have k-1 distinct values, so we have reached a blatant contradiction. Therefore this case is impossible and the solution given above is unique.