Solving systems of linear equations is a cornerstone of numerical computing, underpinning countless applications from engineering simulations to machine learning optimization. While direct methods like Gaussian elimination provide exact solutions, they can be computationally expensive for very large systems. Iterative methods offer an elegant alternative, progressively refining an initial guess until convergence.
One of the most intuitive iterative approaches involves diagonal decomposition of the coefficient matrix. Consider a system of linear equations represented as Ax = b, where A is a square coefficient matrix, x is the unknown solution vector, and b is the constants vector.
The core insight is to decompose matrix A into three parts: a diagonal matrix D holding the diagonal entries of A, a strictly lower-triangular matrix L, and a strictly upper-triangular matrix U.
So we have: A = D + L + U
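This decomposition can be sketched with NumPy as follows (the matrix used here is the one from the first example below):

```python
import numpy as np

A = np.array([[5.0, -2.0, 3.0],
              [-3.0, 9.0, 1.0],
              [2.0, -1.0, -7.0]])

D = np.diag(np.diag(A))   # diagonal part of A
L = np.tril(A, k=-1)      # strictly lower-triangular part
U = np.triu(A, k=1)       # strictly upper-triangular part

# The three parts sum back to A
assert np.allclose(A, D + L + U)
```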
Rearranging the original equation, we can express each variable in terms of the others:
$$x_i^{(k+1)} = \frac{1}{a_{ii}} \left( b_i - \sum_{j \neq i} a_{ij} x_j^{(k)} \right)$$
Where:
• x_i^{(k+1)} is the new estimate of the i-th variable at iteration k+1
• a_{ii} is the diagonal element of A in row i
• the sum collects the contributions of all off-diagonal elements in row i, evaluated at the previous iterate x^{(k)}
The Algorithm:
• Start from an initial guess, typically x⁽⁰⁾ = [0, 0, …, 0].
• In each iteration, compute every component x_i^{(k+1)} with the formula above, using only values from the previous iterate x⁽ᵏ⁾.
• Repeat for the requested number of iterations.
This scheme, which updates all components simultaneously from the previous iterate, is known as the Jacobi method.
Your Task: Write a Python function that implements this iterative approach to solve a system of linear equations. Given the coefficient matrix A, constants vector b, and number of iterations n, return the approximate solution vector with each element rounded to four decimal places.
Note: This method is guaranteed to converge when the matrix is strictly diagonally dominant (the absolute value of each diagonal element exceeds the sum of the absolute values of the other elements in its row).
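One way to implement the task is sketched below. The function name `solve_jacobi` is illustrative, not prescribed by the task; the per-iteration rounding to four decimal places matches the worked examples that follow.

```python
def solve_jacobi(A, b, n):
    """Approximate the solution of Ax = b with n Jacobi iterations,
    starting from the zero vector. Each component is rounded to four
    decimal places after every iteration, as in the worked examples."""
    size = len(A)
    x = [0.0] * size                      # initial guess x^(0) = 0
    for _ in range(n):
        x_new = []
        for i in range(size):
            # sum of off-diagonal contributions, using the previous iterate
            s = sum(A[i][j] * x[j] for j in range(size) if j != i)
            x_new.append(round((b[i] - s) / A[i][i], 4))
        x = x_new                         # all components updated at once
    return x
```

For instance, `solve_jacobi([[5, -2, 3], [-3, 9, 1], [2, -1, -7]], [-1, 2, 3], 2)` returns `[0.146, 0.2032, -0.5175]`, matching the first example below.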
A = [[5, -2, 3], [-3, 9, 1], [2, -1, -7]]
b = [-1, 2, 3]
n = 2
Expected output: [0.146, 0.2032, -0.5175]
Starting with x⁽⁰⁾ = [0, 0, 0], we perform 2 iterations:
Iteration 1:
• x₁⁽¹⁾ = (1/5) × (-1 - (-2)(0) - (3)(0)) = -1/5 = -0.2
• x₂⁽¹⁾ = (1/9) × (2 - (-3)(0) - (1)(0)) = 2/9 ≈ 0.2222
• x₃⁽¹⁾ = (1/-7) × (3 - (2)(0) - (-1)(0)) = -3/7 ≈ -0.4286
After rounding: x⁽¹⁾ = [-0.2, 0.2222, -0.4286]
Iteration 2:
• x₁⁽²⁾ = (1/5) × (-1 - (-2)(0.2222) - (3)(-0.4286)) = (1/5) × (-1 + 0.4444 + 1.2858) ≈ 0.146
• x₂⁽²⁾ = (1/9) × (2 - (-3)(-0.2) - (1)(-0.4286)) = (1/9) × (2 - 0.6 + 0.4286) ≈ 0.2032
• x₃⁽²⁾ = (1/-7) × (3 - (2)(-0.2) - (-1)(0.2222)) = (1/-7) × (3 + 0.4 + 0.2222) ≈ -0.5175
Final result: [0.146, 0.2032, -0.5175]
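For reference, the two-iteration estimate can be compared against the exact solution, computed here with a direct solver from NumPy (a sketch, not part of the task itself):

```python
import numpy as np

A = np.array([[5.0, -2.0, 3.0],
              [-3.0, 9.0, 1.0],
              [2.0, -1.0, -7.0]])
b = np.array([-1.0, 2.0, 3.0])

exact = np.linalg.solve(A, b)                  # direct solution of Ax = b
approx = np.array([0.146, 0.2032, -0.5175])    # result after 2 Jacobi iterations

print(exact)
print(np.abs(exact - approx))  # per-component error after only 2 iterations
```

With more iterations the Jacobi estimates would continue to close this gap, since the matrix is strictly diagonally dominant.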
A = [[4, 1], [1, 3]]
b = [1, 2]
n = 3
Expected output: [0.1042, 0.6389]
This 2×2 system demonstrates convergence over 3 iterations:
Iteration 1: x⁽⁰⁾ = [0, 0]
• x₁⁽¹⁾ = (1/4) × (1 - 1×0) = 0.25
• x₂⁽¹⁾ = (1/3) × (2 - 1×0) ≈ 0.6667
Iteration 2: x⁽¹⁾ = [0.25, 0.6667]
• x₁⁽²⁾ = (1/4) × (1 - 1×0.6667) ≈ 0.0833
• x₂⁽²⁾ = (1/3) × (2 - 1×0.25) ≈ 0.5833
Iteration 3: x⁽²⁾ = [0.0833, 0.5833]
• x₁⁽³⁾ = (1/4) × (1 - 1×0.5833) ≈ 0.1042
• x₂⁽³⁾ = (1/3) × (2 - 1×0.0833) ≈ 0.6389
Final result: [0.1042, 0.6389]
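The exact solution of this 2×2 system is x₁ = 1/11 ≈ 0.0909 and x₂ = 7/11 ≈ 0.6364, so the three-iteration result above is already close. A quick sketch showing that continued Jacobi iteration (here without the per-iteration rounding used in the task) keeps approaching the exact solution:

```python
A = [[4.0, 1.0], [1.0, 3.0]]
b = [1.0, 2.0]
x = [0.0, 0.0]

for _ in range(25):
    # both components are computed from the previous iterate x,
    # since the list on the right is evaluated before x is reassigned
    x = [(b[0] - A[0][1] * x[1]) / A[0][0],
         (b[1] - A[1][0] * x[0]) / A[1][1]]

print(x)  # should be very close to [1/11, 7/11]
```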
A = [[1, 0, 0], [0, 1, 0], [0, 0, 1]]
b = [5, 10, 15]
n = 1
Expected output: [5.0, 10.0, 15.0]
When the coefficient matrix is the identity matrix, the system simplifies to x = b.
Since there are no off-diagonal elements, each equation becomes:
• x₁ = 5/1 = 5.0
• x₂ = 10/1 = 10.0
• x₃ = 15/1 = 15.0
The solution is obtained exactly in just one iteration: [5.0, 10.0, 15.0]
This demonstrates that the identity matrix is the trivial case where the solution equals the constants vector.
Constraints