# Computer Science: Theory and Application

## r/compsci

First of all, sorry if this has already been posted; I couldn't find any reference to it on this subreddit.

Looks like the long-awaited TAOCP Volume 4B, or more formally "Combinatorial Algorithms, Part 2", will be available on 11 October, i.e. ten days from now! I'm quite excited; this will be one of the most interesting books in the series.

See Knuth's page on the books for reference.

---

Dear Friends,

I would like to share with you this visualization for the formula

1^2 + 2^2 + ... + n^2 = n(n+1)(2n+1)/6. I hope you will enjoy seeing the visual proof unfold as the assembly of a 3D puzzle.
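As a quick sanity check (a minimal Python sketch, not part of the original post), the closed form can be verified numerically against the direct sum:

```python
def sum_of_squares_direct(n):
    # Direct summation: 1^2 + 2^2 + ... + n^2
    return sum(k * k for k in range(1, n + 1))

def sum_of_squares_formula(n):
    # Closed form: n(n+1)(2n+1)/6; integer division is exact here,
    # since n(n+1)(2n+1) is always divisible by 6
    return n * (n + 1) * (2 * n + 1) // 6

# Check agreement for the first hundred values of n
for n in range(1, 101):
    assert sum_of_squares_direct(n) == sum_of_squares_formula(n)
```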

This video was prepared using the manim library.

Enjoy:

https://www.youtube.com/watch?v=NZaEQFn1LGY&ab_channel=Math%2CPhysics%2CEngineering

---

I've been doing research into maximum-cardinality matching for a project. Most sources discuss the Hopcroft–Karp algorithm. Some of them mention that the problem can be reduced to a network-flow problem and solved with the Ford–Fulkerson algorithm.
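To make that reduction concrete, here is a minimal Python sketch (not from the original post; the function name and adjacency-list format are illustrative assumptions) of maximum bipartite matching via repeated augmenting-path search. This is Ford–Fulkerson specialized to the unit-capacity flow network, often called Kuhn's algorithm:

```python
def max_bipartite_matching(adj, n_left, n_right):
    """adj[u] lists the right-side vertices adjacent to left vertex u.
    Returns the size of a maximum matching."""
    match_right = [-1] * n_right  # match_right[v] = left vertex matched to v, or -1

    def try_augment(u, visited):
        # Search for an augmenting path starting at free left vertex u
        for v in adj[u]:
            if not visited[v]:
                visited[v] = True
                # Take v if it is free, or if its current partner can be rematched
                if match_right[v] == -1 or try_augment(match_right[v], visited):
                    match_right[v] = u
                    return True
        return False

    matching = 0
    for u in range(n_left):
        if try_augment(u, [False] * n_right):
            matching += 1
    return matching

# Example: left vertices {0, 1, 2}, right vertices {0, 1, 2},
# edges 0-0, 0-1, 1-0, 2-1 (right vertex 2 is isolated)
adj = [[0, 1], [0], [1]]
print(max_bipartite_matching(adj, 3, 3))  # prints 2
```

Each augmenting-path search takes O(V + E) time and at most V searches are needed, giving O(V * E) overall; Hopcroft–Karp improves this to O(E * sqrt(V)) by finding many shortest augmenting paths per phase.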

But both of these algorithms are old and slow. There have been 50 more years of research on the topic since they were published, and many faster algorithms for solving network flow have been found. Just a few months ago, a near-linear-time algorithm was discovered.

So my question is: why is Hopcroft–Karp still recommended in so many places (GeeksforGeeks, Stack Overflow, Wikipedia, random programming blogs)? Is it because the faster algorithms are more complex? Or do the newer algorithms only work in special cases?