Mar 11, 2002
------------
- review of linear algebra
  - geometry view
  - vector view
- geometry view
  - linear equations in 2D
  - linear equations in 3D
- vector view
  - linear combination of vectors
  - linear dependence and linear independence
- how singularity, solutions, etc. manifest in the different views
  - no solutions
  - one solution
  - infinitely many solutions
- elimination method of solving simultaneous equations Ax=b
  - why Cramer's rule is bad
  - why it is not necessary to explicitly solve for inverses of matrices
  - need for pivoting

Mar 13, 2002
------------
- Gaussian elimination method, formally speaking
- Pivoting and its importance
  - ways of flipping (interchanging) rows

Mar 15, 2002
------------
- Revisiting Gaussian elimination
- our approach to pivoting (like the book, we determine the ratios, but unlike
  the book, we revisit the divisors for the ratios at each stage)
- Operations count for GE
  - for rendering A in upper triangular form => n^3/3
  - for solving for the unknowns => n^2

Mar 18, 2002
------------
- LU Factorization (see the code sketches at the end of these notes)
  - L: unit lower triangular, U: upper triangular
- what do we gain from it?
  - Ax = b decomposes into
    - Ly = b
    - Ux = y
- Advantages of the decompositional approach
  - allows us to solve a whole family of problems (not just one)
- Finding U is trivial, but where is the L in GE?
  - obtained from the negatives of the multipliers
  - also need to invert a lower triangular matrix
- LU breaks down for some matrices
  - some matrices do not have an LU decomposition!
  - these are the cases where pivoting is required for GE
- Capturing pivoting in an LU decomposition
  - instead of A = LU, do PA = LU
- Matrix decompositions in real life
  - SVD: Singular Value Decomposition
  - uses in search engines (Google, CLEVER, etc.)

Mar 20, 2002
------------
- How to find A^(-1) using a GE code
  - you will need this for your Assignment #7
- Recap of iterative ways of solving equations
  - when are they preferred over "direct" approaches?
- Can extend from a single unknown to a vector of unknowns
  - express the equations as x = (something involving x)
  - x was a single variable in Chapter 3 (Newton's method, secant method,
    functional iteration)
  - here, x is a vector
- Different ways of making Ax=b look like x = (something)
  - Gauss-Jacobi
  - Gauss-Seidel
- Mathematically, these two are derived from the same general scheme
  - write A in Ax=b as A = M - N => x = M^(-1)Nx + M^(-1)b
  - different choices of M and N give you Gauss-Jacobi and Gauss-Seidel!
- Continued in the next class!

Mar 22, 2002
------------
- Details of iterative methods (code sketches at the end of these notes)
  - write A = D - L - U
  - in Gauss-Jacobi: M = D, N = L + U
  - in Gauss-Seidel: M = D - L, N = U
  - worked out simple examples
- Convergence criteria for iterative methods
  - when to stop
  - choice of the initial vector
- Consider a new iterative scheme: SOR
  - M = (1/w)(D - wL)
  - what is N?
  - what is this scheme doing?
  - when w = 1, SOR reduces to Gauss-Seidel

Mar 25, 2002
------------
- Details of iterative methods
  - convincing ourselves of what Gauss-Jacobi and Gauss-Seidel are doing
- What is SOR doing?
  - a "mix" between the old values and the new values computed using Gauss-Seidel
  - what are good values for w?

Mar 27, 2002
------------
- Convergence properties of iterative methods
  - if the spectral radius of M^(-1)N is less than 1, then the iterative method
    will converge
  - difficult to check for
- Alternative condition for Jacobi and Gauss-Seidel
  - if the matrix A is diagonally dominant, J and GS will converge
    (this is sufficient but not necessary)
- Alternative condition for SOR
  - if the matrix A is symmetric positive definite, SOR will converge for 0 < w < 2
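
Code sketches
-------------
A minimal Python sketch of the Mar 13-18 material: Gaussian elimination with
partial pivoting, recorded as PA = LU, followed by the two triangular solves
Ly = Pb and Ux = y. The function names and the use of plain Python lists
(rather than numpy) are illustrative choices, not the course's or the book's
code.

    def lu_decompose(A):
        """Factor A as PA = LU with partial pivoting; perm encodes P."""
        n = len(A)
        U = [row[:] for row in A]            # working copy, becomes U
        L = [[0.0] * n for _ in range(n)]    # unit lower triangular
        perm = list(range(n))                # row permutation (encodes P)
        for k in range(n):
            # partial pivoting: bring the largest |U[i][k]|, i >= k, to row k
            p = max(range(k, n), key=lambda i: abs(U[i][k]))
            if U[p][k] == 0.0:
                raise ValueError("matrix is singular")
            if p != k:
                U[k], U[p] = U[p], U[k]
                L[k], L[p] = L[p], L[k]
                perm[k], perm[p] = perm[p], perm[k]
            L[k][k] = 1.0
            for i in range(k + 1, n):
                m = U[i][k] / U[k][k]        # the multiplier...
                L[i][k] = m                  # ...is exactly the (i,k) entry of L
                for j in range(k, n):
                    U[i][j] -= m * U[k][j]
        return perm, L, U

    def lu_solve(perm, L, U, b):
        """Solve Ax = b given PA = LU: forward, then back substitution."""
        n = len(b)
        pb = [b[i] for i in perm]            # Pb
        y = [0.0] * n
        for i in range(n):                   # Ly = Pb
            y[i] = pb[i] - sum(L[i][j] * y[j] for j in range(i))
        x = [0.0] * n
        for i in reversed(range(n)):         # Ux = y
            x[i] = (y[i] - sum(U[i][j] * x[j] for j in range(i + 1, n))) / U[i][i]
        return x

    # example: factor once, then reuse for as many right-hand sides as needed
    A = [[2.0, 1.0, 1.0], [4.0, 3.0, 3.0], [8.0, 7.0, 9.0]]
    perm, L, U = lu_decompose(A)
    print(lu_solve(perm, L, U, [4.0, 10.0, 24.0]))   # -> [1.0, 1.0, 1.0], up to rounding

This is the "whole family of problems" point from Mar 18: the O(n^3/3)
factorization is done once, and each new right-hand side costs only the O(n^2)
triangular solves. The same idea gives A^(-1) from a GE code (Mar 20): solve
once per column of the identity and assemble the results column by column.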
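
A sketch of the Mar 20-22 iterative schemes, written componentwise from the
splitting A = D - L - U (x_{k+1} = M^(-1)N x_k + M^(-1)b). The tolerance, the
iteration cap, and the stopping test (largest absolute change between iterates)
are illustrative assumptions, since the notes leave "when to stop" open.

    def jacobi(A, b, x0, tol=1e-10, max_iter=500):
        """Gauss-Jacobi: M = D, N = L + U; new x uses only old values."""
        n = len(b)
        x = x0[:]
        for _ in range(max_iter):
            x_new = [(b[i] - sum(A[i][j] * x[j] for j in range(n) if j != i)) / A[i][i]
                     for i in range(n)]
            if max(abs(x_new[i] - x[i]) for i in range(n)) < tol:
                return x_new
            x = x_new
        return x

    def gauss_seidel(A, b, x0, tol=1e-10, max_iter=500):
        """Gauss-Seidel: M = D - L, N = U; components 0..i-1 are already new."""
        n = len(b)
        x = x0[:]
        for _ in range(max_iter):
            x_old = x[:]
            for i in range(n):
                s = sum(A[i][j] * x[j] for j in range(n) if j != i)
                x[i] = (b[i] - s) / A[i][i]
            if max(abs(x[i] - x_old[i]) for i in range(n)) < tol:
                return x
        return x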
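
A sketch of SOR (Mar 22-25): each component is a weighted mix of its old value
and the Gauss-Seidel update, and w = 1 recovers Gauss-Seidel exactly. For the
"what is N?" question, N = M - A = (1/w)((1-w)D + wU). The default w = 1.2
below is an arbitrary illustrative value; good values of w depend on the
problem.

    def sor(A, b, x0, w=1.2, tol=1e-10, max_iter=500):
        """SOR with relaxation parameter w: M = (1/w)(D - wL)."""
        n = len(b)
        x = x0[:]
        for _ in range(max_iter):
            x_old = x[:]
            for i in range(n):
                s = sum(A[i][j] * x[j] for j in range(n) if j != i)
                gs = (b[i] - s) / A[i][i]           # the Gauss-Seidel value
                x[i] = (1.0 - w) * x[i] + w * gs    # blend old and new
            if max(abs(x[i] - x_old[i]) for i in range(n)) < tol:
                return x
        return x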
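
A small check for the Mar 27 sufficient (but not necessary) condition for
Jacobi and Gauss-Seidel, in its usual strict row form: each diagonal entry
dominates, in absolute value, the sum of the other entries in its row.

    def is_strictly_diagonally_dominant(A):
        """True if |a_ii| > sum over j != i of |a_ij| for every row i."""
        n = len(A)
        return all(abs(A[i][i]) > sum(abs(A[i][j]) for j in range(n) if j != i)
                   for i in range(n))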