Off on a tangent from some of the comments: It is an interesting research question to understand which algebraic structures (over which convolution is defined) admit fast algorithms and which do not.

Consider two n-dimensional vectors x and y; denote their (⊕,⊗) convolution by z. 1/n https://twitter.com/gabrielpeyre/status/1303921143150239749
Here ⊕ is the addition operation and ⊗ is the multiplication operation of the semiring under consideration.

It is easy to see that for any (⊕,⊗), the convolution can be computed trivially in O(n^2) operations.
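As a sketch, the trivial O(n^2) scheme can be written once for an arbitrary semiring by passing ⊕ and ⊗ in as functions (the name `semiring_convolve` and its argument names are illustrative, not from the thread):

```python
# Naive O(n^2) convolution over an arbitrary semiring (oplus, otimes).
# `identity` is the oplus-identity (0 for ordinary sums, +inf for min).
def semiring_convolve(x, y, oplus, otimes, identity):
    n = len(x)
    z = []
    for k in range(2 * n - 1):
        acc = identity
        # sum (via oplus) over all pairs i + j = k
        for i in range(max(0, k - n + 1), min(k, n - 1) + 1):
            acc = oplus(acc, otimes(x[i], y[k - i]))
        z.append(acc)
    return z

# (+, x): ordinary convolution
print(semiring_convolve([1, 2], [3, 4], lambda a, b: a + b, lambda a, b: a * b, 0))
# (min, +): tropical convolution
print(semiring_convolve([1, 2], [3, 4], min, lambda a, b: a + b, float("inf")))
```

The same loop specializes to every case discussed below just by swapping the two operations.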

For the familiar case ⊕ = + and ⊗ = ×, it can be done efficiently, in O(n log n), via the FFT. 2/n
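A minimal sketch of the FFT route, assuming NumPy: transform both (zero-padded) vectors, multiply pointwise, and invert.

```python
import numpy as np

# (x, +) convolution in O(n log n) via the FFT: pointwise multiplication
# in frequency space. Zero-pad to length >= 2n - 1 to avoid the circular
# wrap-around of the plain DFT.
def fft_convolve(x, y):
    n = len(x) + len(y) - 1
    X = np.fft.rfft(x, n)
    Y = np.fft.rfft(y, n)
    return np.fft.irfft(X * Y, n)

x, y = [1, 2, 3], [4, 5, 6]
print(fft_convolve(x, y))  # agrees with np.convolve(x, y) up to roundoff
```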

However, if ⊕ = min and ⊗ = +, i.e. the (min,+) convolution, then no truly subquadratic algorithm is known: nothing runs in O(n^{2-ε}) time for any ε > 0. https://arxiv.org/abs/1702.07669 3/n
If, however, one of the sequences is convex or concave, then it can be done in O(n log n): http://cs.brown.edu/people/pfelzens/papers/dt-final.pdf
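A concrete convex instance is y_j = j², where the (min,+) convolution is exactly the 1-D squared-distance transform; a sketch of the lower-envelope algorithm from the linked paper (which in this special case even runs in linear time):

```python
# 1-D squared-distance transform: d[q] = min_j (f[j] + (q - j)^2),
# i.e. the (min,+) convolution of f with the convex sequence j^2,
# restricted to indices 0..n-1. Lower-envelope-of-parabolas method
# from the linked Felzenszwalb-Huttenlocher paper (a sketch).
def dt1d(f):
    n = len(f)
    d = [0.0] * n
    v = [0] * n            # indices of parabolas in the lower envelope
    z = [0.0] * (n + 1)    # boundaries between consecutive parabolas
    k = 0
    z[0], z[1] = -float("inf"), float("inf")
    for q in range(1, n):
        while True:
            # intersection of parabola from q with rightmost envelope parabola
            s = ((f[q] + q * q) - (f[v[k]] + v[k] * v[k])) / (2 * q - 2 * v[k])
            if s <= z[k]:
                k -= 1     # parabola v[k] is hidden; pop it
            else:
                break
        k += 1
        v[k] = q
        z[k] = s
        z[k + 1] = float("inf")
    k = 0
    for q in range(n):     # read the envelope back out
        while z[k + 1] < q:
            k += 1
        d[q] = (q - v[k]) ** 2 + f[v[k]]
    return d

f = [3.0, 1.0, 4.0, 1.0, 5.0, 9.0, 2.0, 6.0]
naive = [min(f[j] + (q - j) ** 2 for j in range(len(f))) for q in range(len(f))]
print(dt1d(f) == naive)  # the envelope method matches the O(n^2) loop
```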

But if ⊕ = min and ⊗ = max, then the convolution can somehow be done much faster, being reducible to √n instances of the usual (×,+) convolution. 4/n
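The observation driving that reduction can be sketched directly (this is only the key step, not the full √n-block algorithm): z_k ≤ t iff some pair i + j = k has x_i ≤ t and y_j ≤ t, which is one ordinary (×,+) convolution of 0/1 indicator vectors.

```python
import numpy as np

# (min, max) convolution, naive O(n^2) version:
#   z_k = min over i + j = k of max(x_i, y_j)
def minmax_convolve_naive(x, y):
    n = len(x)
    return [min(max(x[i], y[k - i])
                for i in range(max(0, k - n + 1), min(k, n - 1) + 1))
            for k in range(2 * n - 1)]

# Threshold test behind the reduction: z_k <= t iff the ordinary (x, +)
# convolution of the indicators [x <= t] and [y <= t] is nonzero at k.
rng = np.random.default_rng(0)
x = rng.integers(0, 50, 8)
y = rng.integers(0, 50, 8)
z = minmax_convolve_naive(list(x), list(y))
t = 25
below = np.convolve((x <= t).astype(int), (y <= t).astype(int)) > 0
assert all((zk <= t) == b for zk, b in zip(z, below))
```

Running this test over √n batched thresholds (plus a cheap within-block search) is what yields the speedup.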

Which structures admit faster algorithms, and why? Does anyone know of attempts at a unified treatment? Note also that keeping (×,+) fixed but changing the domain of the functions to groups is an active area of research in FFT design.
But I believe that in that setting, for compact groups etc., a fast transform can always be obtained.