Compressing modifications to permutations

Tags: math

Let’s say that you have two permutations π and σ which are roughly equal (“roughly equal” is an intentionally vague term). You may want to compress the modifications that turn π into σ. Formally:

  1. Given π and σ which are permutations on n elements, we want a compression algorithm C(π, σ) which outputs a “small” bitstring. This should be paired with…
  2. …a decompression algorithm, such that D(π, C(π, σ)) = σ.

This problem sounds esoteric, but it’s actually relatively common. For example, in real-time collaborative editing, one user may rearrange items in an array. To propagate the edits to another user, we’ll want to send them instructions about how to reorder their list in the same way. One way to solve this is to send the result of the compression algorithm.

The most common way I see this solved is via sending a series of instructions of the form “move the element at index x to index y”. Of course, such a representation may not be the most efficient, depending on how many such moves occur. In particular, this takes 2k log₂(n) bits to encode k moves on an n-element permutation.
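As a concrete sketch, the move-based scheme could look like the following (the (x, y) pair format and the helper names are illustrative assumptions, not a standard):

```python
from math import ceil, log2

def apply_moves(arr, moves):
    # Replay "move the element at index x to index y" instructions.
    arr = list(arr)
    for x, y in moves:
        arr.insert(y, arr.pop(x))
    return arr

def move_encoding_bits(k, n):
    # Each move names two indices in [0, n), costing 2*ceil(log2(n)) bits.
    return 2 * k * ceil(log2(n))
```

For example, `apply_moves([1, 2, 3], [(2, 1)])` yields `[1, 3, 2]`, at a cost of `move_encoding_bits(1, 3) = 4` bits.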

Another simple method is to define C(π, σ) to be the Lehmer encoding of σ, and then to decompress by decoding the Lehmer encoding. This works, but it takes Θ(n log n) bits and completely ignores π. Information-theoretic bounds tell us that we can’t do better than log₂(n!) = Θ(n log n) bits in the average case, which this method does achieve, but we want something which works well when π and σ are roughly equal. Can we do better?
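For reference, a minimal Lehmer encode/decode pair might look like this (the function names are mine; digit i can take n − i values, so the whole code packs into about log₂(n!) bits):

```python
def lehmer_encode(perm):
    # Digit i counts how many later elements are smaller than perm[i].
    return [sum(later < x for later in perm[i + 1:]) for i, x in enumerate(perm)]

def lehmer_decode(code, items):
    # Rebuild by repeatedly picking the code[i]-th smallest remaining item.
    remaining = sorted(items)
    return [remaining.pop(c) for c in code]
```

For example, `lehmer_encode([1, 3, 2])` gives `[0, 1, 0]`, and `lehmer_decode([0, 1, 0], [1, 2, 3])` recovers `[1, 3, 2]`.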

Compression from Sorting Algorithms

Consider a deterministic comparison sorting algorithm on a set S. The sorting algorithm requires a comparison relation < : S × S → {0, 1}, where <(a, b) returns 1 if the first input is less than the second and 0 otherwise.

From here on, we’ll talk about permutations as if they are expressed in one-line notation. For concreteness, we’ll use π = (1 2 3) and σ = (1 3 2).

We can construct a compression algorithm from any comparison sorting algorithm. Define <σ(a, b) as 1 iff a appears before b in σ. Note that applying a sorting algorithm with <σ to π will result in σ.

The compression algorithm is surprisingly simple: record the results of <σ as you do the sorting algorithm. The result is simply a bitstring. Let’s see an example with our concrete values of π = (1 2 3) and σ = (1 3 2) and using insertion sort:

  • Insertion sort looks at 2 and 1. Compare <σ(2, 1) = 0 since 2 is not before 1 in σ. Insertion sort concludes these two elements are in the correct order.
  • Insertion sort looks at 3 and 2. Compare <σ(3, 2) = 1 since 3 is before 2 in σ. Insertion sort swaps these two elements; the array is now (1 3 2).
  • Insertion sort looks at 3 and 1. Compare <σ(3, 1) = 0 since 3 is not before 1 in σ, so the insertion stops.

If we record the results of <σ, we get C(π, σ) = [0, 1, 0]. Now our decompression algorithm can re-run the sorting algorithm, using the recorded bits in place of σ itself, which reproduces σ.
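Here’s a minimal sketch of this compress/decompress pair built on a plain insertion sort (treat the helper names as illustrative; depending on the exact insertion-sort variant, the recorded bits may differ slightly from any hand-traced example):

```python
def insertion_sort(arr, cmp):
    # cmp(a, b) is truthy iff a should come before b.
    arr = list(arr)
    for i in range(1, len(arr)):
        j = i
        while j > 0 and cmp(arr[j], arr[j - 1]):
            arr[j], arr[j - 1] = arr[j - 1], arr[j]
            j -= 1
    return arr

def compress(pi, sigma):
    rank = {v: i for i, v in enumerate(sigma)}  # position of each value in sigma
    log = []
    def cmp(a, b):
        result = rank[a] < rank[b]  # is a before b in sigma?
        log.append(result)
        return result
    insertion_sort(pi, cmp)
    return log

def decompress(pi, log):
    # Replay the same deterministic sort, answering comparisons from the log.
    bits = iter(log)
    return insertion_sort(pi, lambda a, b: next(bits))
```

Because both sides run the same deterministic algorithm, `decompress(pi, compress(pi, sigma))` returns `sigma` for any two permutations of the same elements.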

The efficiency of this method depends on the exact sorting algorithm used, and adaptive sorting algorithms like insertion sort can perform particularly well in some cases. Insertion sort runs in Θ(n + k) time, where k is the number of inversions. Since the number of comparisons it performs is also Θ(n + k), the output can be significantly smaller than Θ(n log n) bits when k is small!

When we said “roughly equal” in the introduction, the definition is essentially arbitrary – different adaptive sorting algorithms will provide different results for different data sets. There’s no compression algorithm which is best for all use cases, just like there’s no generally best sorting algorithm.

Real-world implementation

We can also run an ordinary compression algorithm, such as zlib or run-length encoding, on the result of C. These often seem to help in practice. Here are some real-world results on a benchmark with n = 3255, expressed in terms of “number of bits required for compression output divided by n”:

method                         efficiency
theoretical lower bound        10.2 bits/n
merge sort + zlib              1.16 bits/n
binary insertion sort + zlib   1.29 bits/n
shellsort + zlib               2.78 bits/n
timsort + zlib                 1.14 bits/n

The best result is around nine times smaller than the theoretical lower bound! This is not a contradiction: the bound applies to the average case over all permutations, while this benchmark’s π and σ are roughly equal.
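As a sketch of that post-compression step, the comparison bits can be packed into bytes and run through zlib (pack_bits/unpack_bits are assumed helpers, not library functions):

```python
import zlib

def pack_bits(bits):
    # Pack booleans into bytes, 8 bits per byte, least-significant bit first.
    out = bytearray((len(bits) + 7) // 8)
    for i, bit in enumerate(bits):
        if bit:
            out[i >> 3] |= 1 << (i & 7)
    return bytes(out)

def unpack_bits(data, n):
    # Inverse of pack_bits: recover the first n bits.
    return [bool((data[i >> 3] >> (i & 7)) & 1) for i in range(n)]

bits = [False, True, False, False] * 500   # a highly repetitive comparison log
payload = zlib.compress(pack_bits(bits), 9)
assert unpack_bits(zlib.decompress(payload), len(bits)) == bits
```

On a repetitive log like this one, the zlib payload ends up far smaller than the 250 raw packed bytes, which is where the sub-1.3 bits/n figures above come from.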

Here’s some Python code implementing the core algorithms discussed:

def make_logging_cmp(cmp):
    # Wrap a comparator so that every result is recorded in log.
    log = []
    def logging_cmp(a, b):
        result = cmp(a, b)
        log.append(result)  # record each comparison; this log is the compressed output
        return result
    return logging_cmp, log

def make_cmp_from_permutation(sigma):
    # a <_sigma b iff a appears before b in sigma.
    sigma_inv = {v: k for k, v in enumerate(sigma)}
    return lambda a, b: sigma_inv[a] < sigma_inv[b]

def make_cmp_from_log(log):
    # Replay recorded comparison results instead of consulting sigma.
    i = 0
    def cmp(_a, _b):
        nonlocal i
        result = log[i]
        i += 1
        return result
    return cmp

def timsort(cmp, pi):
    # Python's built-in sorted() is timsort; Number adapts our cmp to __lt__.
    class Number:
        def __init__(self, i):
            self.i = i
        def __lt__(self, other):
            return cmp(self.i, other.i)
    return sorted([Number(x) for x in pi])

sigma = [1, 3, 2]
pi = [1, 2, 3]
cmp = make_cmp_from_permutation(sigma)
logging_cmp, log = make_logging_cmp(cmp)
timsort(logging_cmp, pi)
print('compression result', log)
# outputs [False, True, True, False]

decompression_cmp = make_cmp_from_log(log)
sigma2 = [x.i for x in timsort(decompression_cmp, pi)]
print('decompression works', sigma2 == sigma)
# outputs True
Posted on 2022-03-25