Objectives
Post on 23-Feb-2016
Objectives
• Linear Equations
• Solving Linear Equations
  • Point-Jacobi Method
  • L-U Decomposition / Gaussian Elimination
• Non-Linear Equations
• Solving Non-linear Equations in 1D
  • Interval Bisection Method
  • Newton’s Method
• Solving Systems of NLE’s
  • Newton’s Method

Good Reference: Heath, Scientific Computing, McGraw-Hill, 2005
Linear Equations
• Effects are directly proportional to causes
• Examples:
  f(x) = a·x + b
  {F(x)} = [A]{x} + {b}, where [A] is m×n, {x} is n×1, and {b} is m×1
Linear Equations
• Examples in building energy modeling:
• Steady-state conduction in a multi-component wall system
[Figure: wall cross-section showing facade slab, insulation, and gypsum layers]
{Q(T)} = ({k}A/L)·[M]·{T}
where [M] is the coupling matrix given on the slide
Solving Linear Equations: Fixed Point Iteration/ Jacobi Iteration
• If we start with an initial guess for the solution, we can see how close this guess is, which will inform our next guess, and so on until we arrive at the solution
• In fixed-point iteration schemes, we put the equation in the form x = f(x) and use successive guesses for x (RHS) to get our next guess (LHS)
Solving Linear Equations: Fixed Point Iteration/ Jacobi Iteration
• Illustration of Jacobi iteration for a scalar: solve 3x = 6 by iterating
  – We need a term “x” by itself on one side
  – Break the 3x term apart, i.e. 2x + x = 6
  – Keep the 2x term on the left (2x = -x + 6) and rearrange to give x = -x/2 + 3
Solving Linear Equations: Fixed Point Iteration/ Jacobi Iteration
• Now that we have the equation in the proper form, make an initial guess for x
• x0 = … say, 7
• Plug into right side only to get x=-7/2+3=-0.5
• -0.5 is now our second guess, x1
Solving Linear Equations: Fixed Point Iteration/ Jacobi Iteration
• If we continue in this manner, we approach the solution x = 2

Trial   xi        xi+1
0       7         -0.5
1       -0.5      3.25
2       3.25      1.375
3       1.375     2.3125
4       2.3125    1.84375
5       1.84375   2.078125
6       2.078125  1.960938
7       1.960938  2.019531
8       2.019531  1.990234
9       1.990234  2.004883
10      2.004883  1.997559
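The scalar iteration above can be sketched in a few lines of Python. This is a minimal illustration, not part of the original slides: it iterates x = -x/2 + 3 from the guess x0 = 7 until successive guesses agree.

```python
def fixed_point(g, x0, tol=1e-6, max_iter=100):
    """Iterate x_{k+1} = g(x_k) until successive guesses agree to within tol."""
    x = x0
    for _ in range(max_iter):
        x_new = g(x)
        if abs(x_new - x) < tol:
            return x_new
        x = x_new
    raise RuntimeError("fixed-point iteration did not converge")

# Solve 3x = 6 via the rearranged form x = -x/2 + 3, starting at 7
x = fixed_point(lambda x: -x / 2 + 3, 7.0)
print(x)  # approaches the solution x = 2
```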
Solving Linear Equations: Fixed Point Iteration/ Jacobi Iteration
• We can extend this to systems of linear equations
• Given [A]{x} = {b}
• “Split” matrix [A] into [D] and [C], where [D] is the diagonal elements of [A] and [C] is [A] - [D]
• What results is Dx = -Cx + b, or x = -D^-1·Cx + D^-1·b
Solving Linear Equations: Fixed Point Iteration/ Jacobi Iteration
x = -D^-1·Cx + D^-1·b
• We now have x isolated on one side
• Do the same process as for the scalar equation:
  – Initial guess for {x}, {x0}
  – Plug this into the right side only
  – The resulting value of the left side becomes the next guess, {x1}, and so on until convergence
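The matrix version of the same scheme can be sketched with NumPy. The 2×2 system below is made up for illustration (and chosen diagonally dominant so the iteration converges):

```python
import numpy as np

def jacobi(A, b, x0=None, tol=1e-8, max_iter=500):
    """Jacobi iteration: x_{k+1} = D^-1 (b - C x_k), with A = D + C."""
    A = np.asarray(A, dtype=float)
    b = np.asarray(b, dtype=float)
    D = np.diag(np.diag(A))              # diagonal part of A
    C = A - D                            # off-diagonal remainder
    D_inv = np.diag(1.0 / np.diag(A))
    x = np.zeros_like(b) if x0 is None else np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        x_new = D_inv @ (b - C @ x)      # plug guess into the right side only
        if np.linalg.norm(x_new - x) < tol:
            return x_new
        x = x_new
    raise RuntimeError("Jacobi iteration did not converge")

A = [[4.0, 1.0], [2.0, 5.0]]
b = [9.0, 13.0]
print(jacobi(A, b))  # should agree with np.linalg.solve(A, b)
```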
Solving Linear Equations: Fixed Point Iteration/ Jacobi Iteration
• Convergence
  – What if we wrote the original scalar equation as x = -2x + 6 and did the same type of iteration?

Trial   xi      xi+1
1       7       -8
2       -8      22
3       22      -38
4       -38     82
5       82      -158
6       -158    322
7       322     -638
8       -638    1282
9       1282    -2558
10      -2558   5122
11      5122    -10238
Solving Linear Equations: Fixed Point Iteration/ Jacobi Iteration
• Depending on the original formulation of the iteration equation, the iteration may or may not converge
• In general, given ax = b, we convert to (a-C)x + Cx = b
• Then our iteration scheme is C·xk+1 = b - (a-C)·xk, which results in xk+1 = -((a-C)/C)·xk + b/C
Solving Linear Equations: Fixed Point Iteration/ Jacobi Iteration
• Under what conditions will this converge as k→∞? The error is multiplied by -(a-C)/C each iteration, so we need |(a-C)/C| < 1
• Similarly, for systems of equations, all eigenvalues of the iteration matrix -D^-1·C must have magnitude < 1 (equivalently, its spectral radius must be < 1; a matrix norm < 1 is sufficient)
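This condition can be checked numerically: compute the spectral radius of the Jacobi iteration matrix -D^-1·C. A small sketch (the example matrices are made up; the second one is deliberately not diagonally dominant):

```python
import numpy as np

def jacobi_spectral_radius(A):
    """Largest eigenvalue magnitude of the Jacobi iteration matrix -D^-1 C."""
    A = np.asarray(A, dtype=float)
    D = np.diag(np.diag(A))
    C = A - D
    G = -np.linalg.inv(D) @ C          # iteration matrix
    return max(abs(np.linalg.eigvals(G)))

print(jacobi_spectral_radius([[4.0, 1.0], [2.0, 5.0]]))  # < 1: Jacobi converges
print(jacobi_spectral_radius([[1.0, 4.0], [5.0, 2.0]]))  # > 1: Jacobi diverges
```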
Solving Linear Equations: Gaussian Elimination
• A single equation is easy enough:
  – f(x) = 0 = a·x + b
  – x = -b/a
• We can extend this to a system of equations if we can transform the matrix so that each row can be solved for one variable at a time
• Doesn’t require iteration; it is a direct calculation
• Used in MATLAB and other software to do matrix division
Solving LE’s: Gaussian Elimination
• Gaussian Elimination, L-U Factorization, L-U Decomposition
• Given Ax = b, where A is an m×n matrix
• Transform into something of the form [L]{x} = {c}, where [L] is a “lower triangular” matrix (what if it is “upper triangular”?)
Solving LE’s: Gaussian Elimination
• How do we transform our original equation?
• Multiplying both sides of any matrix equation by a matrix [M] does not change the solution if [M] is non-singular
• Therefore we can do
  [M1][M2]…[Mn][A]{x} = [M1][M2]…[Mn]{b}
  until we arrive at [N]{x} = [P]{b},
  where [N] = [M1][M2]…[Mn][A] is lower triangular or upper triangular
Solving LE’s: Gaussian Elimination
• How do we transform A into N?
• Start with a simple example: Ax = b, where A is a 2×2 matrix (the entries of A and b were shown on the slide)
• If we pre-multiply both sides by a suitable matrix M (also shown on the slide), we get
  Nx = MAx = Mb, with N upper triangular
• Can we now solve for all values of x?
Solving LE’s: Gaussian Elimination
• We transformed a 2×2 matrix into an upper triangular matrix, N.
• We can do this for any size non-singular matrix by using the following transformation
Solving LE’s: Gaussian Elimination
• Given Ax = b, with the constituents of A being aij:

for k = 1 to n-1                        (loop over columns)
    if akk ≠ 0                          (avoid dividing by 0)
        for i = k+1 to n
            mik = aik/akk               (divide each entry below the diagonal by the diagonal entry for that column)
        end
        for j = k+1 to n
            for i = k+1 to n
                aij = aij - mik·akj     (transform each member of the lower part of the matrix)
            end
        end
    end
end

• What results is a transformed version of A which is upper triangular
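The pseudocode above can be sketched in Python (with 0-based indices instead of the slides’ 1-based ones), adding back substitution to actually recover x. Like the pseudocode, it does no pivoting and assumes no diagonal entry becomes zero; the 2×2 example system is made up for illustration.

```python
import numpy as np

def gauss_solve(A, b):
    """Gaussian elimination without pivoting, then back substitution."""
    A = np.array(A, dtype=float)
    b = np.array(b, dtype=float)
    n = len(b)
    # Forward elimination: zero out entries below the diagonal, column by column
    for k in range(n - 1):
        if A[k, k] == 0:
            raise ZeroDivisionError("zero pivot; pivoting would be needed")
        for i in range(k + 1, n):
            m = A[i, k] / A[k, k]        # multiplier m_ik
            A[i, k:] -= m * A[k, k:]     # transform the rest of row i
            b[i] -= m * b[k]             # apply the same operation to b
    # Back substitution on the resulting upper-triangular system
    x = np.zeros(n)
    for i in range(n - 1, -1, -1):
        x[i] = (b[i] - A[i, i + 1:] @ x[i + 1:]) / A[i, i]
    return x

print(gauss_solve([[2.0, 1.0], [4.0, 1.0]], [5.0, 9.0]))  # [2. 1.]
```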
Solving LE’s: Gaussian Elimination
• With the process on the previous slide, we can transform any non-singular matrix, with a few stipulations
• Much more information in handouts on process, derivation, etc.
• One example problem now
Solving LE’s: Gaussian Elimination
Given Ax = b (with A and b as shown on the slide), solve for x
Solving LE’s: Gaussian Elimination
Premultiply first by M1 and then by M2 (elimination matrices shown on the slide) to get M2·M1·A·x = M2·M1·b, where M2·M1·A is upper triangular
Solving LE’s: Gaussian Elimination
This is easily solved by successive (back) substitution to give x
Non-Linear Equations
• Arise often in building energy modeling applications
• e.g. calculate radiation heat transfer from a roof when the sky temperature is known:
Q(Troof) = F·ε·σ·(Tsky^4 - Troof^4)·Aroof
where F is the view factor, ε the emissivity, and σ the Stefan-Boltzmann constant
[Figure: roof radiating to the sky; Tsky is known, Troof is the quantity of interest]
Solving Non-Linear Equations
• Analytical solutions rarely possible
• Often large systems of equations
• Must use numerical techniques
• Will introduce two:
  • Interval Bisection (single equation)
  • Newton’s Method (single equation or system)
Solving NLE’s: Interval Bisection
• All NLE’s can be written as homogeneous equations, i.e. f(x) = 0
• Therefore, all solutions of NLE’s are “zero-finding” exercises
• If we know an approximate interval where f(x) crosses the x-axis, we can make the interval smaller and smaller until we have a tiny interval in which the solution lies
Solving NLE’s: Interval Bisection
[Figure: plot of f(x) crossing the x-axis between x=a and x=b]
Solving NLE’s: Interval Bisection
[Figure: the same plot, with the midpoint x=(a+b)/2 marked]
Solving NLE’s: Interval Bisection
[Figure: as above]
Is the zero between x=a and x=(a+b)/2, or between x=(a+b)/2 and x=b?
Repeat the process again until the interval is smaller than a certain tolerance
Solving NLE’s: Interval Bisection
• Example: Find the solution of f(x) = (x-2)^2 - 4
  – We know the answer is between, say, -3 and 1.1
• Solution:

a       b       m       f(a)    f(b)    f(m)    tol
-3.000  1.100   -0.950  21.000  -3.190  4.703   4.100
-0.950  1.100   0.075   4.703   -3.190  -0.294  2.050
-0.950  0.075   -0.438  4.703   -0.294  1.941   1.025
-0.438  0.075   -0.181  1.941   -0.294  0.758   0.513
-0.181  0.075   -0.053  0.758   -0.294  0.215   0.256
-0.053  0.075   0.011   0.215   -0.294  -0.044  0.128
-0.053  0.011   -0.021  0.215   -0.044  0.085   0.064
-0.021  0.011   -0.005  0.085   -0.044  0.020   0.032
-0.005  0.011   0.003   0.020   -0.044  -0.012  0.016
-0.005  0.003   -0.001  0.020   -0.012  0.004   0.008
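The bisection steps tabulated above can be sketched in Python: halve the bracketing interval until it is smaller than a tolerance. The tolerance value here is an arbitrary choice for illustration.

```python
def bisect(f, a, b, tol=1e-3):
    """Interval bisection: shrink [a, b] around a sign change of f."""
    if f(a) * f(b) > 0:
        raise ValueError("f(a) and f(b) must have opposite signs")
    while b - a > tol:
        m = (a + b) / 2
        if f(a) * f(m) <= 0:   # zero lies in [a, m]
            b = m
        else:                  # zero lies in [m, b]
            a = m
    return (a + b) / 2

# The example above: f(x) = (x-2)^2 - 4 on [-3, 1.1]; the root in this interval is x = 0
root = bisect(lambda x: (x - 2) ** 2 - 4, -3.0, 1.1)
print(root)
```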
Solving NLE’s: Newton’s Method
• With interval bisection, we only looked at the sign of the function at a given point
• If we also look at the derivative of the function, we can find a solution faster
• Newton’s Method uses a Taylor series expansion:

f(x+h) = f(x) + f’(x)·h + higher-order terms
Solving NLE’s: Newton’s Method
• If we drop the higher-order terms we get f(x+h) ≈ f(x) + f’(x)·h
• If we start at some value of x (initial guess), we want to find a value of h for which f(x+h) is as close to 0 as possible
• Setting the right side to zero, this occurs at h = -f(x)/f’(x)
• We then evaluate the function and its derivative at (x+h) and start the process again
Solving NLE’s: Newton’s Method
• Mathematically:

k = 0
x0 = initial guess
while |f(xk)| > tolerance
    xk+1 = xk - f(xk)/f’(xk)
    k = k + 1
end
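The loop above translates directly to Python. This sketch applies it to the running example f(x) = (x-2)^2 - 4 with f’(x) = 2(x-2), starting at x = -2; the tolerance and iteration cap are arbitrary illustrative choices.

```python
def newton(f, fprime, x0, tol=1e-8, max_iter=50):
    """Newton's method: x_{k+1} = x_k - f(x_k)/f'(x_k) until |f| <= tol."""
    x = x0
    for _ in range(max_iter):
        if abs(f(x)) <= tol:
            return x
        x = x - f(x) / fprime(x)
    raise RuntimeError("Newton's method did not converge")

root = newton(lambda x: (x - 2) ** 2 - 4,   # f(x)
              lambda x: 2 * (x - 2),        # f'(x)
              -2.0)                         # initial guess
print(root)  # converges to the root x = 0
```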
Solving NLE’s: Newton’s Method
• Graphically:
[Figure: plot of f(x) with the initial guess x0 marked; the tangent line at x0 crosses the x-axis at the next guess]
Solving NLE’s: Newton’s Method
• This can be extended to systems of NLE’s
• Instead of the derivative we use the Jacobian matrix: [Jf]ij = ∂fi/∂xj
• The truncated Taylor series is then f(x+s) ≈ f(x) + Jf(x)·s
• And we use the iteration:
  x0 = initial guess
  solve Jf(xk)·sk = -f(xk) for sk
  xk+1 = xk + sk
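The system iteration above can be sketched with NumPy: solve Jf(xk)·sk = -f(xk) for the step, then update. The example system (a unit circle intersected with the line x = y) is made up for illustration.

```python
import numpy as np

def newton_system(f, jac, x0, tol=1e-10, max_iter=50):
    """Newton's method for systems: solve J(x_k) s_k = -f(x_k), x_{k+1} = x_k + s_k."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        fx = f(x)
        if np.linalg.norm(fx) <= tol:
            return x
        s = np.linalg.solve(jac(x), -fx)   # Newton step s_k
        x = x + s
    raise RuntimeError("Newton's method did not converge")

# f1 = x^2 + y^2 - 1 = 0, f2 = x - y = 0; one solution is (1/sqrt(2), 1/sqrt(2))
f = lambda v: np.array([v[0] ** 2 + v[1] ** 2 - 1, v[0] - v[1]])
jac = lambda v: np.array([[2 * v[0], 2 * v[1]], [1.0, -1.0]])
print(newton_system(f, jac, [1.0, 0.5]))
```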
Solving NLE’s: Newton’s Method
Example: Solve f(x) = (x-2)^2 - 4 = 0, starting at, say, x = -2

xi        f(xi)    f’(xi)
-2        12       -8
-0.5      2.25     -5
-0.05     0.2025   -4.1
-0.0006   0.0024

• Notice how much faster Newton’s Method converges
• For linear equations it converges in one step
• Why? The truncated Taylor series is exact for a linear function
• Newton has “quadratic convergence”
• Interval bisection has “linear convergence”
Summary
• 2 methods for solving systems of linear equations
• 2 methods for solving non-linear equations
• Discussed convergence and computational efficiency
• Please contact me with any questions about this or the rest of the class.

Jordan Clark
jdclark@utexas