Solving systems of linear equations is one of the most fundamental and ubiquitous operations in scientific computing, engineering, and machine learning. The Row Reduction Method (also known as Gaussian Elimination) is a classical direct algorithm that systematically transforms a system of linear equations into an equivalent upper triangular form, from which the solution can be easily computed via backward substitution.
Given a system of n linear equations with n unknowns represented in matrix form as Ax = b, where A is the n×n coefficient matrix, x is the vector of unknowns, and b is the right-hand-side constant vector.
The row reduction algorithm works in two phases:
Phase 1: Forward Elimination (Reduction to Upper Triangular Form) Transform the augmented matrix [A|b] into an upper triangular system by eliminating variables below the diagonal. For each pivot position, the algorithm selects a pivot row (see partial pivoting below), swaps it into place, and subtracts suitable multiples of the pivot row from every row beneath it to zero out the entries below the pivot.
Phase 2: Backward Substitution Starting from the last equation (which has only one unknown), solve for each variable:
$$x_i = \frac{b_i' - \sum_{j=i+1}^{n} a_{ij}' x_j}{a_{ii}'}$$
where the primed values represent the transformed system after forward elimination.
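The backward substitution formula above translates directly into a short loop. A minimal sketch, assuming the forward-elimination phase has already produced an upper triangular matrix `U` and transformed right-hand side `c` (names chosen for illustration):

```python
def backward_substitution(U, c):
    """Solve Ux = c for an upper triangular matrix U, last equation first."""
    n = len(U)
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        # Sum of already-solved terms: a'_ij * x_j for j > i.
        s = sum(U[i][j] * x[j] for j in range(i + 1, n))
        x[i] = (c[i] - s) / U[i][i]
    return x
```

For example, on the triangular system [ 2 1 | 5 ], [ 0 2.5 | 7.5 ] from Example 2 below, `backward_substitution([[2.0, 1.0], [0.0, 2.5]], [5.0, 7.5])` yields `[1.0, 3.0]`.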
Partial Pivoting: To ensure numerical stability and avoid division by very small numbers, the algorithm uses partial pivoting: before eliminating entries in column k, it searches for the row with the largest absolute value in that column (at or below row k) and swaps it with the current row. This minimizes rounding errors and prevents division by zero.
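The pivot search can be sketched in one line; the function name `pivot_row` is assumed for illustration:

```python
def pivot_row(M, k):
    """Index of the row at or below k with the largest |entry| in column k."""
    return max(range(k, len(M)), key=lambda r: abs(M[r][k]))
```

For column 0 of [[2, 8], [4, 10]], the search returns row index 1, since |4| > |2|.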
Your Task: Implement a Python function that solves a system of linear equations using row reduction with partial pivoting. The function should accept a coefficient matrix A and a constant vector b, and return the solution vector x.
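One way to structure such a solver is sketched below; the function name `row_reduce_solve` and the singularity tolerance `1e-12` are choices made for this illustration, not part of the specification:

```python
def row_reduce_solve(A, b):
    """Solve Ax = b by row reduction (Gaussian elimination) with partial pivoting.

    A is an n x n list of lists, b a list of length n; returns the solution x.
    """
    n = len(A)
    # Augmented matrix [A|b]; rows are copied so the inputs are not modified.
    M = [list(row) + [b[i]] for i, row in enumerate(A)]

    # Phase 1: forward elimination with partial pivoting.
    for k in range(n):
        # Row at or below k with the largest |entry| in column k.
        p = max(range(k, n), key=lambda r: abs(M[r][k]))
        if abs(M[p][k]) < 1e-12:  # tolerance assumed for this sketch
            raise ValueError("matrix is singular or nearly singular")
        M[k], M[p] = M[p], M[k]  # swap the pivot row into place
        for i in range(k + 1, n):
            factor = M[i][k] / M[k][k]
            for j in range(k, n + 1):
                M[i][j] -= factor * M[k][j]

    # Phase 2: backward substitution.
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        s = sum(M[i][j] * x[j] for j in range(i + 1, n))
        x[i] = (M[i][n] - s) / M[i][i]
    return x
```

On Example 1 below, `row_reduce_solve([[2.0, 8.0, 4.0], [2.0, 5.0, 1.0], [4.0, 10.0, -1.0]], [2.0, 5.0, 1.0])` returns `[11.0, -4.0, 3.0]`.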
A = [[2.0, 8.0, 4.0], [2.0, 5.0, 1.0], [4.0, 10.0, -1.0]]
b = [2.0, 5.0, 1.0]
Expected output: x = [11.0, -4.0, 3.0]
We need to solve the system:
• 2x₁ + 8x₂ + 4x₃ = 2
• 2x₁ + 5x₂ + 1x₃ = 5
• 4x₁ + 10x₂ - 1x₃ = 1
Step 1: Form the augmented matrix [A|b]:
[ 2  8  4 | 2 ]
[ 2  5  1 | 5 ]
[ 4 10 -1 | 1 ]
Step 2: Partial pivoting - Row 3 has the largest first element (4), swap R1 ↔ R3:
[ 4 10 -1 | 1 ]
[ 2  5  1 | 5 ]
[ 2  8  4 | 2 ]
Step 3: Eliminate below the first pivot: R2 = R2 - (2/4)R1, R3 = R3 - (2/4)R1:
[ 4 10 -1   | 1   ]
[ 0  0  1.5 | 4.5 ]
[ 0  3  4.5 | 1.5 ]
Step 4: The second pivot position now holds a 0, so partial pivoting swaps R2 ↔ R3 (3 is the largest entry in column 2 at or below row 2). The system is then upper triangular, and backward substitution gives x₃ = 4.5/1.5 = 3, x₂ = (1.5 - 4.5(3))/3 = -4, and x₁ = (1 - 10(-4) + 1(3))/4 = 11.
The final solution is x = [11.0, -4.0, 3.0], which can be verified:
• 2(11) + 8(-4) + 4(3) = 22 - 32 + 12 = 2 ✓
• 2(11) + 5(-4) + 1(3) = 22 - 20 + 3 = 5 ✓
• 4(11) + 10(-4) - 1(3) = 44 - 40 - 3 = 1 ✓
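This verification can be reproduced in a couple of lines by substituting x back into each row of A:

```python
A = [[2.0, 8.0, 4.0], [2.0, 5.0, 1.0], [4.0, 10.0, -1.0]]
x = [11.0, -4.0, 3.0]
# Dot product of each row of A with x; should reproduce b = [2.0, 5.0, 1.0].
residuals = [sum(a * xi for a, xi in zip(row, x)) for row in A]
print(residuals)  # [2.0, 5.0, 1.0]
```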
A = [[2.0, 1.0], [1.0, 3.0]]
b = [5.0, 10.0]
Expected output: x = [1.0, 3.0]
We need to solve the 2×2 system:
• 2x₁ + 1x₂ = 5
• 1x₁ + 3x₂ = 10
Step 1: Form the augmented matrix:
[ 2 1 |  5 ]
[ 1 3 | 10 ]
Step 2: The first column already has its largest absolute value (2) in the top row, so no swap is needed.
Step 3: Eliminate below the pivot: R2 = R2 - (1/2)R1:
[ 2 1   | 5   ]
[ 0 2.5 | 7.5 ]
Step 4: Backward substitution:
From row 2: 2.5x₂ = 7.5 → x₂ = 3.0
From row 1: 2x₁ + 1(3) = 5 → x₁ = 1.0
The solution is x = [1.0, 3.0]. Verification: 2(1) + 1(3) = 5 ✓, and 1(1) + 3(3) = 10 ✓
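For this 2×2 case, the elimination and back-substitution steps above map onto just a few lines; a sketch with variable names chosen for illustration:

```python
# Augmented matrix [A|b] for the 2x2 system.
M = [[2.0, 1.0, 5.0], [1.0, 3.0, 10.0]]
f = M[1][0] / M[0][0]                                # multiplier 1/2
M[1] = [M[1][j] - f * M[0][j] for j in range(3)]     # R2 = R2 - (1/2)R1
x2 = M[1][2] / M[1][1]                               # 7.5 / 2.5 = 3.0
x1 = (M[0][2] - M[0][1] * x2) / M[0][0]              # (5 - 3) / 2 = 1.0
print([x1, x2])  # [1.0, 3.0]
```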
A = [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]]
b = [3.0, 7.0, 2.0]
Expected output: x = [3.0, 7.0, 2.0]
This is a special case where A is the identity matrix I₃×₃.
For the identity matrix, the system Ix = b simplifies directly to x = b.
The matrix is already in upper triangular form (in fact, diagonal form), so no elimination is needed. Each variable's value equals the corresponding constant:
• 1x₁ + 0x₂ + 0x₃ = 3 → x₁ = 3
• 0x₁ + 1x₂ + 0x₃ = 7 → x₂ = 7
• 0x₁ + 0x₂ + 1x₃ = 2 → x₃ = 2
The solution x = [3.0, 7.0, 2.0] is simply the right-hand side vector b.
Constraints