Today I learned in my CZ4024 Cryptography and Network Security class how to calculate a matrix determinant. I remember from high school that there are at least two ways to do it: Laplace’s formula (a.k.a. the cofactor method) and Gaussian elimination (a.k.a. row reduction).

With Laplace’s method, we can see quite quickly that to calculate the determinant of a square matrix of dimension n, we need to calculate the determinants of n square matrices of dimension n-1. Thus, in terms of time complexity, it goes to `O(n!)` (no wonder it was so tedious to do back in high school).
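To make the recursion concrete, here is a minimal sketch of the cofactor expansion in Python, expanding along the first row (the function name and structure are my own, not from any particular textbook):

```python
def det_laplace(m):
    """Determinant by Laplace (cofactor) expansion along the first row.

    Each call spawns n recursive calls on (n-1)x(n-1) minors,
    hence the O(n!) running time.
    """
    n = len(m)
    if n == 1:
        return m[0][0]
    total = 0
    for j in range(n):
        # Minor: drop row 0 and column j.
        minor = [row[:j] + row[j + 1:] for row in m[1:]]
        sign = -1 if j % 2 else 1  # alternating cofactor signs
        total += sign * m[0][j] * det_laplace(minor)
    return total

print(det_laplace([[1, 2], [3, 4]]))  # -> -2
```

Even at n = 10 this already makes millions of recursive calls, which matches the high-school tedium.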

Contrast that with Gaussian elimination: what we need to do is bring all the values in the lower triangle (hence at most `O(n^2 / 2)` values) to zero. We can do it by zeroing all the values in the first column, then the second column, and so on. In each operation, we replace a row with the sum of the original row and a scaled version of another row, and that takes `O(n)` time. That means, in total, we only need `O((n^2 / 2) * n) = O(n^3)` time. Once the matrix is upper triangular, the determinant is just the product of the diagonal entries (with a sign flip for each row swap along the way).
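The same idea as a short Python sketch (partial pivoting and the tolerance threshold are my own choices for numerical robustness, not part of the textbook description):

```python
def det_gauss(m):
    """Determinant via Gaussian elimination in O(n^3) time.

    Reduce to upper-triangular form, then multiply the diagonal,
    flipping the sign once per row swap.
    """
    a = [row[:] for row in m]  # work on a copy
    n = len(a)
    det = 1.0
    for col in range(n):
        # Partial pivoting: pick the row with the largest entry in this column.
        pivot = max(range(col, n), key=lambda r: abs(a[r][col]))
        if abs(a[pivot][col]) < 1e-12:
            return 0.0  # column is all zeros below: matrix is singular
        if pivot != col:
            a[col], a[pivot] = a[pivot], a[col]
            det = -det  # each row swap flips the determinant's sign
        for r in range(col + 1, n):
            factor = a[r][col] / a[col][col]
            # Replace row r with (row r) - factor * (pivot row): O(n) work.
            for c in range(col, n):
                a[r][c] -= factor * a[col][c]
        det *= a[col][col]
    return det

print(det_gauss([[1, 2], [3, 4]]))  # -> -2.0
```

The two nested loops over `col` and `r`, each doing `O(n)` row work, are exactly where the `O(n^3)` comes from.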

Surely there are many other methods to calculate determinants, and `O(n^3)` is probably still far from the fastest. But what I think is important to note here is how math can deliver non-trivial optimizations of a computation.

(images sourced from Wikipedia)