
Alessandro Chiesa: Sampling matchings

Having a cryptographic bent means I have more reductions in mind than algorithms, BUT:

When I took Costis’ course a year ago, I really liked the Jerrum–Sinclair Markov chain for sampling matchings; it was the highlight of the course for me.  The surprising part, to me, was the ability to sample quite structured and varied objects (namely, the different matchings of a graph) using only very local operations, in a way that still mixes fast.  The exact design and analysis of the Markov chain are also clever, especially the analysis, which uses the technique of “flow encodings”: you bound the mixing time of a Markov chain by studying certain carefully chosen flows on the state graph.
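To make the “very local operations” concrete, here is a minimal sketch (my own, not from the course notes) of the chain’s local moves on matchings: pick a random edge and try to delete it, add it, or slide it. This is the unweighted version; the weighted monomer–dimer chain adds a Metropolis acceptance step.

```python
import random

def matching_chain_step(edges, M):
    """One local move of a Jerrum-Sinclair-style chain on matchings.

    edges: list of edges (u, v); M: frozenset of edges forming a matching.
    Pick a uniformly random edge and try to delete, add, or slide it.
    (Sketch of the unweighted moves only; no Metropolis weights.)
    """
    u, v = e = random.choice(edges)
    if e in M:
        return M - {e}                    # delete move
    touching = [f for f in M if u in f or v in f]
    if not touching:
        return M | {e}                    # add move: both endpoints free
    if len(touching) == 1:
        return (M - {touching[0]}) | {e}  # slide move: one endpoint free
    return M                              # both endpoints covered: stay put
```

Running the chain on a triangle, for example, walks among its four matchings (the empty matching and the three single edges) while every intermediate state remains a valid matching.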


Henry Yuen: Constructive Lovasz Local Lemma

See this PDF: hyuen_favorite_algorithm

Travis Hance: Suffix Arrays in Linear Time

I think splay trees are really surprising, but I’m not sure if that “counts”, since it’s more of a data structure than an algorithm.

After splay trees, I think the next most surprising algorithm is the O(n) construction of a suffix array of a string. (The suffix array lists the suffixes of a string over a fixed alphabet in lexicographic order.) It’s surprising to me that the suffix array can be computed that fast, and I think the algorithm itself is cool. There are a few ways to do it; I’ll briefly describe the one I learned. To find the suffix array of the string x_1,…,x_n, you first radix-sort the character triples to give each a rank, and then recursively compute the suffix array of the roughly (2n/3)-length string of triples

<x_1, x_2, x_3>, <x_4, x_5, x_6>, …, <x_{n-2}, x_{n-1}, x_n>, $, <x_2, x_3, x_4>, <x_5, x_6, x_7>, …, <x_{n-1}, x_n, $>

After recursively computing the suffix array of this string, we have sorted all the suffixes that start at a position i with i ≡ 1 or 2 (mod 3). By another radix sort, using the first character together with the already-computed rank of the following 1 mod 3 suffix as the key, we can then sort the suffixes starting at positions ≡ 0 (mod 3). Now we have two “partial” suffix arrays, and we just need to merge them. To compare a suffix starting at a 0 mod 3 position with one starting at a 1 mod 3 position, we compare the first characters; if they are equal, we are left comparing a suffix at a 1 mod 3 position with a suffix at a 2 mod 3 position, which we can already do. Similarly, we can compare a 0 mod 3 suffix with a 2 mod 3 suffix (comparing the first two characters reduces it to a 2 mod 3 vs. 1 mod 3 comparison). So all suffixes across the two partial arrays can be compared, and the arrays can be merged.
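The linear-time recursion described above (the DC3 or “skew” algorithm of Kärkkäinen and Sanders) is fiddly to implement in full. As a runnable illustration of what a suffix array is, here is instead the simpler O(n log² n) prefix-doubling construction, which at round k sorts suffixes by their first 2^k characters:

```python
def suffix_array(s):
    """O(n log^2 n) prefix-doubling construction of the suffix array.

    At round k, rank[i] is the rank of suffix i among all suffixes
    compared by their first k characters; sorting by the pair
    (rank[i], rank[i+k]) doubles the compared length each round.
    """
    n = len(s)
    if n == 0:
        return []
    sa = list(range(n))
    rank = [ord(c) for c in s]
    k = 1
    while True:
        # Key: (rank of first k chars, rank of next k chars, or -1 past the end).
        key = lambda i: (rank[i], rank[i + k] if i + k < n else -1)
        sa.sort(key=key)
        new_rank = [0] * n
        for j in range(1, n):
            new_rank[sa[j]] = new_rank[sa[j - 1]] + (key(sa[j]) != key(sa[j - 1]))
        rank = new_rank
        if rank[sa[-1]] == n - 1:  # all ranks distinct: fully sorted
            break
        k *= 2
    return sa
```

For example, `suffix_array("banana")` returns `[5, 3, 1, 0, 4, 2]`, since the sorted suffixes are a, ana, anana, banana, na, nana.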

Again, the reason I find this surprising is simply that when I first heard about the problem, it was surprising to me that it could be done in linear time.

Eric Miles: Goldreich-Levin

I think the algorithm I was most surprised by when I first saw it is the Goldreich-Levin algorithm.

The problem is: given an oracle B() such that B(x) = <x,r> with probability 1/2 + eps over a random x, for some unknown r, use B to find r with probability poly(eps). The idea is to choose x1,…,xm uniformly at random for m = O(log 1/eps), and then “guess” bits b1,…,bm such that <xi,r> = bi; all m guesses are correct with probability 2^{-m} = poly(eps). Assuming they are correct, we then get poly(1/eps) pairwise independent vectors for which we know the inner product with r, by taking the subset sums x_S = \sum_{i \in S} xi and b_S = \sum_{i \in S} bi over all nonempty subsets S of {1,…,m}. Finally, we can compute the ith bit of r by taking the majority vote of b_S + B(x_S + ei), where (x_S, b_S) ranges over the pairwise independent elements and ei is the bit vector with its only 1 in the ith position. The majority vote is correct whp by Chebyshev, because of the pairwise independence and the success probability of B.
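Here is a toy sketch of this procedure in Python (my own illustration, not optimized): it tries all 2^m guesses, reconstructs a candidate r from each via the subset-sum majority vote, and returns the list of candidates, which contains r with good probability when B is noisy.

```python
import itertools
import random

def goldreich_levin(B, n, m):
    """Toy sketch of Goldreich-Levin list decoding over GF(2).

    B is an oracle agreeing with <x, r> (mod 2) on most x; n is the
    length of the hidden vector r. We pick m random vectors, guess their
    inner products with r, and for each of the 2^m guesses reconstruct a
    candidate r bit by bit via majority vote over the pairwise
    independent subset sums. Returns the list of 2^m candidates.
    """
    xs = [tuple(random.randint(0, 1) for _ in range(n)) for _ in range(m)]
    candidates = []
    for guess in itertools.product([0, 1], repeat=m):
        # All nonempty subset sums: pairwise independent vectors whose
        # inner products with r are known *if* the guess was correct.
        subset_sums = []
        for mask in range(1, 2 ** m):
            idx = [i for i in range(m) if (mask >> i) & 1]
            x_S = tuple(sum(xs[i][j] for i in idx) % 2 for j in range(n))
            b_S = sum(guess[i] for i in idx) % 2
            subset_sums.append((x_S, b_S))
        cand = []
        for i in range(n):
            # b_S + B(x_S + e_i) = <e_i, r> = r_i whenever B answers correctly.
            votes = sum((b_S + B(tuple((x_S[j] + (j == i)) % 2 for j in range(n)))) % 2
                        for x_S, b_S in subset_sums)
            cand.append(1 if 2 * votes > len(subset_sums) else 0)
        candidates.append(tuple(cand))
    return candidates
```

With a noiseless oracle (eps = 1/2) the guess matching the true inner products recovers r exactly, since every vote equals r_i; with a noisy oracle, Chebyshev guarantees each majority vote is correct with high probability.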
I found this algorithm surprising mainly for two reasons. One is simply that pairwise independence is much more powerful than it first appears. The second is that this way of generating exponentially many (in m) vectors and inner-product bits that are “random enough,” yet still correct assuming the original guesses were, is very clever; it was the first time I had seen this trick.

Algebra and Computation: Welcome

Welcome to Algebra and Computation, the blog partner of the Algebra and Computation website for the MIT course of the same name.

Hopefully we’ll blog here about the things we care about there.