The statement is the first statement I am looking into, and it can be analysed by studying appropriate subsequences, as will be demonstrated below. For the sake of completeness, its definition is included here.

Definition.

: There exists a configuration of the cards that is unwinnable with the alternative ruleset and without 10-tricks.

The alternative ruleset

The ruleset dictates that no cards are drawn to begin with, and that a player must play the top card of the deck as it is drawn.

At the end of the discussion on forming simpler questions, it is explained why a configuration that is winnable under this ruleset is also winnable under the original ruleset.

Claim.

is equivalent to there existing at most 4 mutually exclusive monotone subsequences (two increasing and two decreasing) for each configuration of the deck.

To see that this holds, consider the forward direction first. If holds, then the cards can be arranged into the monotone piles, as the removal of the 10-tricks implies that piles can only be monotone. We require that the subsequences are mutually exclusive because each card is played only once. Therefore, we can extract at most 4 such monotone (two increasing and two decreasing) subsequences. Conversely, if for some configuration there exist 4 such subsequences, it follows that this configuration is winnable.
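As a concrete illustration of this equivalence, the small backtracking check below asks whether a configuration can be distributed over two ascending and two descending piles when the cards must be played in order. This is my own sketch, not code from the text; the function name, the sentinel start values 0 and 100, and the toy example are assumptions based only on the description of the cards (2 to 99) above.

```python
def splits_into_piles(cards, n_up=2, n_down=2):
    """True if `cards`, played strictly in order, can be distributed over n_up
    ascending piles and n_down descending piles (each card goes to one pile)."""
    ups = [0] * n_up        # sentinel below every card value (cards are 2..99)
    downs = [100] * n_down  # sentinel above every card value

    def place(i):
        if i == len(cards):
            return True                      # every card has found a pile
        c = cards[i]
        for piles, fits in ((ups, lambda top: c > top),
                            (downs, lambda top: c < top)):
            for j, top in enumerate(piles):
                if fits(top):
                    piles[j] = c             # try this pile...
                    if place(i + 1):
                        return True
                    piles[j] = top           # ...and backtrack if it leads nowhere
        return False

    return place(0)

print(splits_into_piles([10, 20, 15, 30, 5, 40]))  # True for this toy configuration
```

Each ascending pile traces out an increasing subsequence of the configuration and each descending pile a decreasing one, which is exactly the correspondence used in the argument above.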

This leads to the following question:

What is the lowest upper bound for the number of mutually exclusive monotone subsequences for any card configuration?

This number corresponds to the minimum number of piles that is necessary for the game under to always be winnable. It thus also provides an upper bound for the number of piles that is necessary for the game under to be winnable, but there is no guarantee that this is the lowest such bound; in fact, it is most likely not.

is some shuffling of the cards, and can be thought of as just a permutation of the numbers 2 to 99. It is simpler to translate this into a permutation of the numbers 1 to 98, and no information is lost - we are simply working in parallel with the context of the game.

Let denote the set of permutations of the numbers . That is, if is an element of (or in simpler English, belongs to) , then is simply a permutation of the numbers , and contains in it all the possible such permutations. It is a standard mathematical exercise to show that contains elements. Notice now that in the context of the question, is equivalent to an element of .

Definition.

For a given sequence , denote by the minimum number of monotone subsequences that partition . Then, let be defined as the maximum number that attains over all the elements of .

Thus the lowest upper bound for the number of monotone subsequences that partition is given by .

The first thing I did to familiarise myself with this question was to try out some smaller examples on paper. Very quickly we can see that .

After the first few small numbers, this becomes computationally difficult and tedious for a human, so I decided to ask Python to do the hard work for me. This type of task is one I often find very enjoyable: writing concrete logical steps for something that may not typically be broken down in that way in our thoughtscape. After getting some code to do the hard work, it became apparent that this would soon become computationally expensive for a computer as well. This is quite clear, as there are combinations to check to determine the maximum, and this is a quantity that grows incredibly rapidly with increasing .
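For very small decks, the brute force can be written down directly. The sketch below is my own rendering of this kind of search (the names and structure are mine, not the code referred to in the text); it simply tries every assignment of positions to k groups and reports the smallest k that works.

```python
from itertools import permutations, product

def is_monotone(seq):
    """A sequence of distinct numbers is monotone if it is increasing or decreasing."""
    pairs = list(zip(seq, seq[1:]))
    return all(a < b for a, b in pairs) or all(a > b for a, b in pairs)

def min_monotone_partition(perm):
    """Smallest k such that perm splits into k mutually exclusive monotone subsequences."""
    n = len(perm)
    for k in range(1, n + 1):
        for labels in product(range(k), repeat=n):     # assign each position to a group
            groups = [[perm[i] for i in range(n) if labels[i] == g] for g in range(k)]
            if all(is_monotone(g) for g in groups if g):
                return k

def worst_case(n):
    """Maximum of min_monotone_partition over all permutations of 1..n."""
    return max(min_monotone_partition(p) for p in permutations(range(1, n + 1)))

for n in range(2, 7):        # already painfully slow much beyond this
    print(n, worst_case(n))
```

There are roughly k^n assignments to check per permutation, across n! permutations, which is why this approach stalls almost immediately as n grows.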

In order to determine (an upper bound for) , I calculated (an upper bound for) for the first elements of (called here, by abuse of notation), and then took the maximum of all these numbers. The way I determined (an upper bound for) is as follows:

Algorithm for determining an upper bound for

  1. Determine the longest monotone subsequence in , call this
  2. Remove all the elements of from
  3. Repeat 1 and 2 until no elements are left in

Then an upper bound for is the number of subsequences determined, or more simply the number of times this procedure was repeated.

What happens here is essentially that the longest subsequence (or, strictly speaking, a longest subsequence) is determined at every step and removed from . Thus each subsequently removed subsequence is no longer than the previous one. It is clear that this procedure gives an upper bound for . The reason I cannot claim that this is the best such bound is that the longest monotone subsequence need not be unique!

We can see this for example in the sequence . This has two increasing subsequences of the maximum length, 3. Neither is better than the other in this case, but it does create decisions that need to be made in a general implementation, and decisions are not a good thing here: all the different choices need to be examined, and the smallest resulting bound determined. The complexity of this problem is already high enough that I did not pursue this.

Continuing with this sub-optimal algorithm, the first thing I had to ask this algorithm to do was to determine a monotone subsequence that is as long as possible. This was successful, albeit very slow, due to the many comparisons that had to be performed at each step.
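To make this concrete, here is a compact sketch of that greedy procedure, with a deliberately simple quadratic search for a longest monotone subsequence. This is my own code, written to mirror the description, not the implementation in the repository mentioned later.

```python
def longest_monotone(seq):
    """Return one longest monotone (increasing or decreasing) subsequence of seq,
    via a simple O(n^2) scan over 'best subsequence ending at each position'."""
    def longest(order):
        best = [[x] for x in seq]      # best[i]: longest ordered subsequence ending at seq[i]
        for i, x in enumerate(seq):
            for j in range(i):
                if order(seq[j], x) and len(best[j]) + 1 > len(best[i]):
                    best[i] = best[j] + [x]
        return max(best, key=len) if best else []

    inc = longest(lambda a, b: a < b)
    dec = longest(lambda a, b: a > b)
    return inc if len(inc) >= len(dec) else dec

def greedy_upper_bound(seq):
    """Number of greedy removals needed to exhaust seq: an upper bound on the
    minimum number of monotone subsequences that partition it."""
    seq, count = list(seq), 0
    while seq:
        chosen = set(longest_monotone(seq))   # elements are distinct, so a set is safe
        seq = [x for x in seq if x not in chosen]
        count += 1
    return count

print(greedy_upper_bound([2, 4, 1, 5, 3]))    # -> 2 for this toy permutation
```

The nested loop inside `longest` is exactly the pile of comparisons at each step that makes this slow.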

After some research, I found an algorithm published by Fredman (1975) and credited to Donald Knuth, which determines the length of the longest increasing subsequence. I was both unsurprised and happy to read this, as Knuth is a person I find truly inspiring in so many aspects of his work, and especially his attitude. Apart from this, the aforementioned paper shows that this comparison algorithm has optimal performance for a comparison algorithm - it performs “as well as it could in this situation”.

Words of praise to the side, I will quickly outline the variant of this algorithm that was implemented to determine the longest increasing subsequence.

Algorithm for determining a longest increasing subsequence

Idea: Take a sequence , and keep tabs on the ‘best’ sequences of each length produced by moving forward in the elements of in increasing . (Best here refers to the sequence that has the smallest end value, as that is the sequence that has the most possible values that can be appended to it.) Start by putting into the best sequence of length one. Then, for each from to , if is larger than the last entry of the longest best subsequence, then create the new longest best sequence, by appending to this previously longest best sequence. Otherwise, replace the last element of a smallest best subsequence by if that creates a new best subsequence.

In detail: Let be a sequence of distinct numbers of length . We will denote by some sequence, and its elements by , where ranges from to .

Then define the sequence iteratively as follows: (note: we refer to the -th element of by )

Start by setting .

Then, for each , do the following:

If for the greatest for which exists, then create by setting and for . This simply corresponds to being the sequence with the element appended to the end.

Otherwise, it means we cannot create , but we can improve some where .

To continue the procedure, starting from and working backwards to , check whether and for any . At the point that this condition is satisfied, stop this process and create the new by appending to in a similar manner to the case above. If there is no such for which this is satisfied, then it must be that , in which case now set , which updates accordingly.

When all the elements of have been exhausted, the longest increasing subsequence is given by for the largest .

As has been mentioned already, it is important to highlight here that this longest subsequence is not unique. There are alternative steps one could take at some points that would give a different subsequence, which reflects the fact that there need not be a unique longest subsequence.
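Here is one way the outlined variant might look in Python. This is my own sketch rather than the repository code, and the backwards scan in the description is replaced by a binary search (`bisect_left`) over the stored tails, which is what keeps the number of comparisons down.

```python
from bisect import bisect_left

def longest_increasing_subsequence(seq):
    """One longest increasing subsequence of seq (distinct values assumed)."""
    tails = []              # tails[k]: smallest last value of an increasing subsequence of length k+1
    tail_idx = []           # index into seq of that last value
    prev = [-1] * len(seq)  # predecessor pointers, used to rebuild a longest subsequence
    for i, x in enumerate(seq):
        k = bisect_left(tails, x)          # first stored tail that is >= x
        if k == len(tails):                # x extends the longest subsequence found so far
            tails.append(x)
            tail_idx.append(i)
        else:                              # x becomes the new, smaller tail for length k+1
            tails[k] = x
            tail_idx[k] = i
        prev[i] = tail_idx[k - 1] if k > 0 else -1
    out, i = [], tail_idx[-1] if tail_idx else -1
    while i != -1:                         # walk the predecessors back from the last tail
        out.append(seq[i])
        i = prev[i]
    return out[::-1]

def longest_monotone_fast(seq):
    """Longer of: a longest increasing and a longest decreasing subsequence."""
    inc = longest_increasing_subsequence(seq)
    dec = [-x for x in longest_increasing_subsequence([-x for x in seq])]
    return inc if len(inc) >= len(dec) else dec

print(longest_increasing_subsequence([3, 1, 4, 2, 8, 5, 9]))  # -> [1, 2, 5, 9]
```

Swapping `longest_monotone_fast` into the greedy loop from the earlier sketch speeds up the repeated removals considerably.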


This algorithmic approach can only be so effective. I had no real hope that this code would give me the answer, but it was a good thing to think about, and it can give some intuition about the first few values. These are:

| N | Upper bound obtained |
|---|---|
| 2 | 1 |
| 3 | 2 |
| 4 | 2 |
| 5 | 3 |
| 6 | 3 |
| 7 | 3 |
| 8 | 4 |
| 9 | 4 |
| 10 | 4 |
| 11 | 5 |

Notice that the rate at which the upper bound obtained for increases is not constant. At the same time, these values are a strong indicator that it is very unlikely that .

However, instead of searching for an upper bound for , I could at least determine a reasonable lower bound by testing the results from shufflings of 98 cards. If the problem of non-uniqueness were not there, I would be confident that this was indeed a true lower bound; after repeating this experiment 100 times, I have some confidence that . The code can be amended to check for cases where more than one solution exists: once a longest subsequence has been determined, run a check to find all subsequences of this length; if a separate one is found, then we know there are more cases. This did not seem worthwhile at the time, so I did not implement it, especially since a more constructive idea surfaced.
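For reference, the experiment described here might look something like the following, reusing `greedy_upper_bound` from the earlier sketch; the 100 repetitions match the text, while the seed is an arbitrary choice of mine for repeatability.

```python
import random

random.seed(0)   # arbitrary; only there so the run can be repeated
results = [greedy_upper_bound(random.sample(range(1, 99), 98)) for _ in range(100)]
print(max(results))   # the tentative lower bound discussed above
```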

While looking at things related to the longest increasing subsequence, I came across a theorem published in 1935 by Erdős and Szekeres (to which I cannot find a secure link, but a search involving those names will give you access to all the insecure links I had access to), which says the following (in the language we are using in this situation):

Theorem (Erdős-Szekeres).

If a sequence is at least of length for some , then there exists a monotone subsequence of length in .

This result can be generalised as follows: If a sequence is at least of length for some , then there exists a monotonically increasing subsequence of length or a monotonically decreasing subsequence of length in .

There exists a simple and easy-to-understand proof of this statement that requires hardly any mathematical knowledge, which I will present here in slightly more detail in the hope that it is widely accessible. This is taken from the 1-page paper by A. Seidenberg, A simple proof of the Erdős-Szekeres Theorem. (Much like The Game, this paper does exactly what it says on the tin.)

This proof makes use of what is called “The Pigeonhole Principle”: the concept that if you have things to put into boxes and , then at least one box will have more than one thing in it.

Proof. Consider a sequence of length , where for some integers and . To each term attach a label , where denotes the length of the longest increasing subsequence ending in , and the length of the longest decreasing subsequence ending in . For example, if then we have:

Explaining this in words: for the first term we have that , since trivially any subsequence ending at the first term has length . Then, for , the longest increasing subsequence ending in is , which is of length , and the longest decreasing subsequence is just , since . Continuing in a similar fashion, we see that the longest increasing subsequence ending in is given by , and the longest decreasing is just ; the longest increasing subsequence ending in is given by , and the longest decreasing is or , which are both of length .
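The labelling is straightforward to compute; the snippet below does so for a small example of my own choosing ([3, 1, 4, 2]), since it is the pairs themselves, not the particular sequence, that matter for the argument.

```python
def labels(seq):
    """For each term of seq, the pair (a, b): a is the length of the longest
    increasing subsequence ending there, b the length of the longest decreasing one."""
    a = [1] * len(seq)
    b = [1] * len(seq)
    for j in range(len(seq)):
        for i in range(j):
            if seq[i] < seq[j]:
                a[j] = max(a[j], a[i] + 1)
            else:                      # values are distinct, so seq[i] > seq[j]
                b[j] = max(b[j], b[i] + 1)
    return list(zip(a, b))

print(labels([3, 1, 4, 2]))   # -> [(1, 1), (1, 2), (2, 1), (2, 2)]: all labels distinct
```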

Now consider two distinct numbers and such that . Only one of or can be true. Let us examine these two cases separately:

Case 1:

What this means is that there is a number in the sequence further to the right of that is greater than it. Therefore, we can deduce that , that is, the longest increasing subsequence that ends in is longer than the longest increasing subsequence that ends in . That is because we can append to the longest increasing subsequence that ended in , since is greater than .

Case 2:

What this means is that there is a number in the sequence further to the right of that is smaller than it. Therefore, we can deduce that , that is, the longest decreasing subsequence that ends in is longer than the longest decreasing subsequence that ends in . That is because we can append to the longest decreasing subsequence that ended in , since is smaller than .

But, this implies the following: no two terms can have the same label, since either the first part or the second part is different, as we observed in the two cases analysed above.

If we assume that there are at most choices for and choices for , then pairing up all these distinct choices gives a total of choices for the labels . But there are distinct labels (since our sequence has terms), so this assumption cannot be true. Therefore there are either more than choices for , or more than choices for . Consequently, there must either exist an increasing subsequence of length at least , or a decreasing subsequence of length at least .


This theorem is the key to a much easier and more robust way of determining . In contrast to the algorithmic method above, we can obtain , rather than just an upper bound for it. This is done analogously to the upper bound algorithm, where a method for determining a longest subsequence is applied until no more elements are left in the sequence. We outline this below:

Algorithm for determining

  1. Set
  2. Find the largest such that
  3. Set to
  4. Repeat 2 and 3 until is set to 0

Then is equal to the number of repetitions.

What happens here is that the Erdős-Szekeres Theorem is applied successively to determine the length of the longest monotone subsequence that is guaranteed to exist for a sequence of numbers of length . Then, imagine that these cards are taken out once the subsequence has been found. This leaves us with cards, where here denotes the length of the subsequence. This process is then repeated until there are no numbers left in the sequence, or respectively, no cards in the deck. We don't actually care about the specifics, since the numbers (resp. cards) can be relabelled without loss of generality, provided that the order is preserved (for example, if the number 11 has been removed, then relabelling 12 to 11 doesn't change anything in the abstract logic of the theorem).

Why does this procedure give ?

It is clear that this procedure gives an upper bound for , but it is not immediate that it is also the lowest such bound. To see this, notice that it works for the worst-case scenario, that is, when attains the maximum. Let denote the length of the longest monotone subsequence of a sequence of length . Since we have chosen to be the largest possible such that , this immediately implies that . We are therefore guaranteed (by the E-S Theorem) that the sequence of length has a monotone subsequence of length , thus . But it is not guaranteed that it has a monotone subsequence of length (and such an example can always be constructed in the vein of the proof of the theorem), thus . Combining these two inequalities, we see that if , then necessarily .
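To spell out the inequalities, write M for the current length, n for the largest integer with n^2 + 1 <= M, and g(M) for the length of the longest monotone subsequence that every sequence of length M is guaranteed to have; these three symbols are shorthand of mine, not notation used elsewhere in the text.

```latex
% Lower bound: Erdos-Szekeres applied to a sequence of length M >= n^2 + 1.
% Upper bound: maximality of n gives M <= (n+1)^2, and the worst-case construction
% of length (n+1)^2 has no monotone subsequence of length n + 2 (any M of its
% terms inherit this property).
\[
  n^2 + 1 \;\le\; M \;\le\; (n+1)^2
  \qquad\Longrightarrow\qquad
  n + 1 \;\le\; g(M) \;\le\; n + 1,
  \quad\text{i.e.}\quad g(M) = n + 1 .
\]
```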

This is a very quick thing to check now, and the Python code below does just that:

"""
Algorithm for L(N).
"""
 
from math import sqrt
  
 
def L(N, description=False):
    # Repeatedly apply the Erdos-Szekeres guarantee to a sequence of length N.
    M = N
    n = int(sqrt(M-1))   # the largest n with n*n + 1 <= M
    i = 0
    if description:
        print(f'Determine L({N}):')
        print(f'The greatest integer whose square is at most M-1={M-1} is {n}.')
 
    while M>0:
        if description:
            print(f'{M} - ({n}+1) = {M-n-1}, so set M to {M-n-1}.')
        i+=1
        if description:
            print(f'At this stage, we have removed the longest subsequence {i} times.')
        M -= n+1
        if M <= n**2:   # the guaranteed subsequence length drops once M is at most n^2
            n -= 1
        if description:    
            if M>0:    
                print(f'The greatest integer whose square is at most M-1={M-1} is {n}.')
    return i
 
print(L(98, description=True))

This snippet is taken from my GitHub folder, which also contains an implementation of the previous algorithm. You can run it without the description to simply get the result for any you are interested in.

Running this for the full deck of 98 cards gives us an answer of 16; the explanation of each step is printed out when description is set to True.

This answers the question then: in order for the game under to always be winnable, we need at most 16 piles. However, from the statement of the theorem we cannot know how many of these are ascending or descending, so the most general answer we can give is 16 ascending and 16 descending piles. These are a lot more piles than the 2 of each kind in the original game! However, is a lot more restrictive than , so this is not disheartening. As explained above, this hasn't shown us anything about the truth of , but we may use this strategy to that end: the worst-case construction behind the Erdős-Szekeres Theorem yields a 98-card configuration whose longest increasing and longest decreasing subsequences each have length at most 10, so two increasing and two decreasing piles can absorb at most 40 of the 98 cards. In conclusion, we have shown that

Conclusion.

is True; an unwinnable configuration exists under .