Why not to use cofactor expansion
Here is a list of cynical reasons for wanting to teach cofactor expansion
for computing determinants. These reasons are framed from the perspective
of an instructor whose goals are to make exams more difficult, induce
errors, or maintain strict control over the course average.
- 1. Exam difficulty and time constraints
- Perfect for "time sinks" on exams
- Why it works
- Cofactor expansion is slow and tedious, especially for
$4 \times 4$ or larger matrices. It requires students to compute
multiple smaller determinants recursively, each of which takes
more time. A single cofactor expansion question can devour 20+
minutes of an exam.
- Motive
- It eats up exam time, forcing students to rush through the rest of
the test. The problem appears "easy" at first glance, but the recursive
nature leads to an exponential explosion of calculations.
- Example
- "Calculate the determinant of this $5 \times 5$ matrix using cofactor
expansion"—it will take forever and students will waste valuable exam time.
- 2. Tedious and error-prone process
- Guaranteed source of student mistakes
- Why it works
- Cofactor expansion requires careful attention to signs $(-1)^{i + j}$,
submatrices, and recursive determinants, making it easy to mess up a single sign, forget
a term, or miscalculate a minor determinant.
- Motive
- This is a "gotcha" method for penalizing students. If they forget even one negative
sign or make a small arithmetic mistake, the entire determinant is wrong. This gives
an easy opportunity for partial marks or flat deductions.
- Example
- "Use cofactor expansion on the second row to compute $\textrm{det}(A)$."—the
student needs to keep track of signs for every element, and small slip-ups can result
in entirely wrong answers.
- Sign rule confusion
- Why it works
- Students have to keep track of the sign $(-1)^{i + j}$
for every term in the expansion. Since sign mistakes are easy to make,
this guarantees confusion and frustration.
- Motive
- It's a built-in method to "trip up" even good students. The seemingly
simple logic of the sign flip becomes more confusing as the matrix size increases.
Students might miscount, lose track of which row or column they’re on, or
incorrectly apply the rule.
- Example
- "Use cofactor expansion along the third row." Half the students will get
the sign wrong because they miscount $i$ and $j$, as if this has any
impact on students being competent engineers.
- Recursive nature is chaos for students
- Why it works
- Cofactor expansion of an $n \times n$ matrix requires computing
$n$ minors, each an $(n - 1) \times (n - 1)$ determinant that itself
requires expansion (a short code sketch after this outline illustrates
the recursion). Students can easily lose track of which minor they
are calculating and for which row or column.
- Motive
- It generates cognitive overload, forcing students to juggle multiple
small determinants in their heads. Mistakes compound as they go deeper
into the recursive tree of calculations.
- Example
- "Calculate $\textrm{det}(A)$ for this $4 \times 4$ matrix
using cofactor expansion." By the time they are working with multiple
$3 \times 3$ determinants, students forget where they started.
- 3. Increases exam grading power
- Partial marks galore
- Why it works
- Since cofactor expansion involves many intermediate steps (calculating
minors, tracking signs, multiplying, and summing), there are multiple
places where instructors can award "partial marks."
- Motive
- It's a way to create more granular grading. If a student does 90%
of the problem correctly but forgets a sign, you can justify giving
them only 70% of the marks. This ensures students are incentivized
to work perfectly, but also allows graders to be "tough but fair."
- Example
- Award 3/10 marks if the student gets the minors right but forgets
the sign. Award 1/10 marks if the student sets up the cofactor formula
but makes a mistake early on.
- Highly subjective grading
- Why it works
- The multi-step nature of cofactor expansion means different students
might approach it slightly differently (choosing different rows/columns
to expand), but mistakes at intermediate steps can result in wildly
different paths.
- Motive
- This gives the grader absolute power over how partial marks are
awarded. Two students may do 90% of the work correctly, but if one
makes an arithmetic mistake and the other forgets a sign, they might
get different grades.
- Example
- If a student chooses the "hardest row" to expand, the instructor
can justify giving them fewer marks than another student who chose
the "easier row" (even if it wasn’t explicitly stated in the question).
- 4. Illusion of complexity equated with rigour
- Makes the course look harder
- Why it works
- Cofactor expansion looks sophisticated. It involves recursive
definitions, submatrices, and abstract mathematical notation, which
gives the impression of mathematical "depth."
- Motive
- It's a way to make the course appear more "rigorous" than it
actually is. From the student's perspective, the complexity seems
high, even though it's just mechanical recursion.
- Example
- Students see $\textrm{det}(A) = \sum_{j = 1}^n (-1)^{i + j} A_{i,j} M_{i,j}$
(expansion along row $i$, with $M_{i,j}$ the minor obtained by deleting row $i$
and column $j$) and feel overwhelmed. This makes the course feel "deep" and "difficult."
- Builds the illusion of competence
- Why it works
- Memorizing the steps of cofactor expansion makes students feel like
they are doing "serious math," even though they aren't learning
efficient computational techniques.
- Motive
- It promotes the illusion that students are developing "mathematical
maturity," but they aren't learning the modern, efficient algorithms
(like LU decomposition) used in practice. This ensures students remain
dependent on the instructor's guidance.
- Example
- Instead of teaching computationally efficient methods like LU
decomposition, the instructor emphasizes cofactor expansion,
giving students false confidence in a "mathy-looking" process.
- 5. Tradition and resistance to change
- "This is how I learned it" defence
- Why it works
- Many instructors themselves were taught cofactor expansion as
the "default" way to compute determinants. There’s a legacy of passing
down this outdated method as a "rite of passage."
- Motive
- It maintains a cycle of suffering. Students have to endure the
same unnecessary effort their instructors went through. There's
an element of "If I had to do it, so should you."
- Example
- "This is how it's always been done."
- Avoids teaching more advanced (and useful) methods
- Why it works
- LU decomposition is more efficient but harder to explain
conceptually. Cofactor expansion, despite being inefficient,
is easier to introduce since it's recursive and "obvious" for
small $2 \times 2$ and $3 \times 3$ matrices.
- Motive
- It's easier to avoid teaching modern, practical methods
like LU decomposition because they require more explanation
(pivoting, row swaps, etc.). Cofactor expansion fits neatly
into a simple, clean recursive definition.
- Example
- Avoid teaching computationally efficient methods. Stick to
"clean" cofactor expansion because it’s "simpler to explain,"
even if students never use it in practice.
For a summary of the reasons:
| Category | Reason |
| --- | --- |
| Exam strategy | Time sink, tedious, source of mistakes |
| Error induction | Sign mistakes, recursive chaos, loss of place |
| Grading power | Partial marks, subjective grading power |
| Course perception | Makes course "seem" harder, builds illusion of rigour |
| Instructor comfort | "I had to do it, so do you," avoid teaching LU |
Cofactor expansion is inefficient, error-prone, and obsolete for large
matrices. Yet, if the instructor's goal is to increase student suffering,
maintain control of the course average, and create a perception of "rigour,"
it is the perfect tool. It has the "illusion of depth," generates errors,
and is perfectly suited for "time-sink" exam questions.
In reality, LU decomposition is far more efficient, stable, and
conceptually valuable, but it requires more up-front explanation,
making it less "convenient" for instructors trying to create simple
exam questions.
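For contrast, here is a hedged sketch of how a determinant is typically computed
in practice: one LU factorization with partial pivoting in $O(n^3)$, then a product
of the diagonal of $U$ with a sign correction for the row swaps. It assumes SciPy's
`lu_factor` is available; the function name `lu_det` is just for illustration.

```python
import numpy as np
from scipy.linalg import lu_factor

def lu_det(A):
    """Determinant via LU factorization with partial pivoting, O(n^3).

    A factors as a permutation times a unit lower triangular L times an
    upper triangular U, so det(A) = +/- prod(diag(U)), with the sign
    given by (-1)^(number of row swaps recorded in the pivot vector).
    """
    lu, piv = lu_factor(np.asarray(A, dtype=float))
    swaps = np.sum(piv != np.arange(len(piv)))
    return (-1.0) ** swaps * np.prod(np.diag(lu))

A = np.random.rand(10, 10)
print(lu_det(A), np.linalg.det(A))  # should agree to rounding error
```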
Why teaching cofactor expansion might be useful
Cofactor expansion provides students with a conceptual foundation for
understanding determinants, recursion, and the decomposition of matrices.
It introduces key ideas like breaking larger problems into smaller
sub-problems, which directly relates to recursion in computer science
and divide-and-conquer algorithms. It also allows students to see how
determinants relate to geometric concepts like area, volume, and
orientation. The cofactor expansion process reinforces attention to
detail, as students must track signs, submatrices, and arithmetic
carefully—all valuable skills for engineering, computer science,
and mathematics. Moreover, cofactor expansion serves as a natural entry
point for inductive proofs and theoretical results in linear algebra,
as many key determinant properties (e.g., row swaps flipping signs,
$\textrm{det}(AB) = \textrm{det}(A)\,\textrm{det}(B)$) are naturally
proved by induction from this definition. Finally, cofactor expansion is historically
significant, reflecting how early mathematicians approached
determinant computation before modern computational techniques
like LU decomposition were developed.
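As a reminder of what "breaking larger problems into smaller sub-problems" looks like
in the smallest nontrivial case, expanding a $3 \times 3$ determinant along its first
row reduces it to three $2 \times 2$ determinants (the standard formula, written here
only for reference):

$$\textrm{det}\begin{pmatrix} a & b & c \\ d & e & f \\ g & h & i \end{pmatrix}
= a\,\textrm{det}\begin{pmatrix} e & f \\ h & i \end{pmatrix}
- b\,\textrm{det}\begin{pmatrix} d & f \\ g & i \end{pmatrix}
+ c\,\textrm{det}\begin{pmatrix} d & e \\ g & h \end{pmatrix}
= a(ei - fh) - b(di - fg) + c(dh - eg).$$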
Why these justifications don't hold up
Despite its conceptual value, teaching cofactor expansion as a
computational method is unjustified. The recursive definition is
elegant, but for matrices larger than $3 \times 3$, it is
hopelessly inefficient, with a time complexity of $O(n!)$
compared to the $O(n^3)$ of modern algorithms like LU decomposition.
Teaching it as a "practical method" misleads students into thinking
it's useful for computation, even though no one in practice would
compute a $10 \times 10$ determinant using cofactor expansion. The
argument that it "trains attention to detail" is weak because many
better tasks (like debugging code or conducting row-reduction) also
train attention while offering real-world utility. While it provides
an intuitive introduction to recursion, the time spent on cofactor
expansion could be better used teaching recursion in contexts where
recursion is actually applied (like depth-first search or
divide-and-conquer algorithms). Historical significance is also a
shallow justification—we don't teach outdated methods like
Newton's fluxions in calculus classes. Ultimately, students gain
far more from learning computationally efficient methods (like
LU decomposition) or conceptually rich approaches (like eigenvalues
and linear transformations) than from suffering through a laborious,
error-prone, and obsolete method like cofactor expansion.
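To put rough numbers on the $O(n!)$ versus $O(n^3)$ gap, here is a back-of-the-envelope
tally of the two growth rates (constant factors ignored; the loop is only a sketch):

```python
import math

# Cofactor expansion touches on the order of n! products;
# LU factorization needs on the order of n^3 operations.
for n in (3, 5, 8, 10, 12):
    print(f"n = {n:2d}   n! = {math.factorial(n):>12,}   n^3 = {n ** 3:>5}")
```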