
Page 1: Final Project (PRINT)

Faculty of Civil Engineering - Year 2

15SCIB02I Numerical Methods

Numerical Methods Applications in Civil Engineering

Submission

(out of 30)

Quality of Application (out of 15)

Quality of Report Output (out of 15)

Name I.D. Oral Discussion (out of 40) Total

Mohamed Gamal 118997

Abdelrahman Khaled 121882

Reham Refky 122742

Rami Ghanima 122593

Mahd El-Din 118591

Supervised By

Dr. Kamal Hassan

Table of Contents:


Introduction

Why Is Studying Numerical Methods Important?

Nonlinear Equations
- Bisection Method
- Newton-Raphson Method
- Fixed-Point Method
- Application

Data
- Polynomial: Lagrange & Newton Methods
- Least Squares Fitting

Integration
- Simpson & Trapezoidal Methods
- Applications
- Romberg's Method

Initial Value Problems (IVP)
- Euler & Runge-Kutta Methods
- Applications

Conclusion

References


1. Introduction:

Numerical analysis is the study of algorithms that use numerical approximation (as opposed to general symbolic manipulations) for the problems of mathematical analysis (as distinguished from discrete mathematics).

One of the earliest mathematical writings is a Babylonian tablet from the Yale Babylonian Collection (YBC 7289), which gives a sexagesimal numerical approximation of √2, the length of the diagonal in a unit square. Being able to compute the sides of a triangle (and hence, being able to compute square roots) is extremely important, for instance, in astronomy, carpentry and construction.[2]

Numerical analysis continues this long tradition of practical mathematical calculations. Much like the Babylonian approximation of √2, modern numerical analysis does not seek exact answers, because exact answers are often impossible to obtain in practice. Instead, much of numerical analysis is concerned with obtaining approximate solutions while maintaining reasonable bounds on errors.

Numerical analysis naturally finds applications in all fields of engineering and the physical sciences, but in the 21st century also the life sciences and even the arts have adopted elements of scientific computations. Ordinary differential equations appear in celestial mechanics (planets, stars and galaxies); numerical linear algebra is important for data analysis; stochastic differential equations and Markov chains are essential in simulating living cells for medicine and biology.

Before the advent of modern computers numerical methods often depended on hand interpolation in large printed tables. Since the mid 20th century, computers calculate the required functions instead. These same interpolation formulas nevertheless continue to be used as part of the software algorithms for solving differential equations.


When using numerical methods or algorithms and computing with finite precision, errors of approximation or rounding and truncation are introduced. It is important to have a notion of their nature and their order. A newly developed method is worthless without an error analysis. Neither does it make sense to use methods which introduce errors with magnitudes larger than the effects to be measured or simulated. On the other hand, using a method with very high accuracy might be computationally too expensive to justify the gain in accuracy.

1.1 Why Is Studying Numerical Methods Important?

The overall goal of the field of numerical analysis is the design and analysis of techniques to give approximate but accurate solutions to hard problems, the variety of which is suggested by the following:

- Advanced numerical methods are essential in making numerical weather prediction feasible.

- Computing the trajectory of a spacecraft requires the accurate numerical solution of a system of ordinary differential equations.

- Car companies can improve the crash safety of their vehicles by using computer simulations of car crashes. Such simulations essentially consist of solving partial differential equations numerically.

- Hedge funds (private investment funds) use tools from all fields of numerical analysis to attempt to calculate the value of stocks and derivatives more precisely than other market participants.

- Airlines use sophisticated optimization algorithms to decide ticket prices, airplane and crew assignments and fuel needs. Historically, such algorithms were developed within the overlapping field of operations research.

- Insurance companies use numerical programs for actuarial analysis.

1.2 Bisection Method:

The bisection method in mathematics is a root-finding method that repeatedly bisects an interval and then selects a subinterval in which a root must lie for further processing. It is a very simple and robust method, but it is also relatively slow. Because of this, it is often used to obtain a rough approximation to a solution which is then used as a starting point for more rapidly converging methods. The method is also called the interval halving method, the binary search method, or the dichotomy method.



The bisection method is the simplest among all the numerical schemes for solving transcendental equations. This scheme is based on the intermediate value theorem for continuous functions. Consider a transcendental equation f(x) = 0 which has a zero in the interval [a,b] with f(a) * f(b) < 0. The bisection scheme computes the zero, say c, by repeatedly halving the interval [a,b]. That is, starting with

c = (a+b) / 2

The interval [a,b] is replaced either with [c,b] or with [a,c] depending on the sign of f(a) * f(c). This process is continued until the zero is obtained. Since the zero is obtained numerically, the value of c may not exactly match all the decimal places of the analytical solution of f(x) = 0 in the interval [a,b]. Hence any one of the following mechanisms can be used to stop the bisection iterations:

C1. Fixing a priori the total number of bisection iterations N; the length of the interval (and hence the maximum error) after N iterations is less than |b − a| / 2^N.

C2. By testing the condition |c_i − c_(i−1)| (where i is the iteration number) less than some tolerance limit, say epsilon, fixed a priori.

C3. By testing the condition |f(c_i)| less than some tolerance limit alpha, again fixed a priori.

Algorithm - Bisection Scheme:

Given a function f(x) continuous on an interval [a,b] with f(a) * f(b) < 0:

do
    c = (a + b)/2
    if f(a) * f(c) < 0 then b = c
    else a = c
while (none of the convergence criteria C1, C2 or C3 is satisfied)
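The scheme above can be sketched in a few lines of Python (an illustrative implementation, not part of the original report; the iteration cap plays the role of criterion C1 and the tolerance test the role of C2):

```python
def bisect(f, a, b, tol=1e-6, max_iter=100):
    """Bisection: repeatedly halve [a, b], keeping the half where f changes sign."""
    if f(a) * f(b) >= 0:
        raise ValueError("f(a) and f(b) must have opposite signs")
    c_old = a
    for _ in range(max_iter):           # criterion C1: cap on iterations
        c = (a + b) / 2
        if f(a) * f(c) < 0:
            b = c                       # root lies in [a, c]
        else:
            a = c                       # root lies in [c, b]
        if abs(c - c_old) < tol:        # criterion C2: successive midpoints agree
            return c
        c_old = c
    return c

# Simple example from this section: f(x) = x^3 + 4x^2 - 10 on [1, 2]
root = bisect(lambda x: x**3 + 4*x**2 - 10, 1, 2)
```

The returned root agrees with the hand iteration table in the worked example below (about 1.3652).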

Advantages and disadvantages of the bisection method

- The method is guaranteed to converge
- The error bound decreases by half with each iteration
- The bisection method converges very slowly
- The bisection method cannot detect multiple roots


Simple Example:

Using the bisection method, find a root of the following equation in [1, 2] with an error less than 0.05%.

x^3 + 4x^2 − 10 = 0

Solution

Let f(x) = x^3 + 4x^2 − 10

f(1) = −5, f(2) = 14


x_n = (a + b)/2,  error = |(x_n − x_(n−1)) / x_n| × 100

a (−ve) | b (+ve) | x_n | error % | sign of f(x_n)
1 | 2 | 1.5 | 0 | +ve
1 | 1.5 | 1.25 | 20 | −ve
1.25 | 1.5 | 1.375 | 9.090909091 | +ve
1.25 | 1.375 | 1.3125 | 4.761904762 | −ve
1.3125 | 1.375 | 1.34375 | 2.325581395 | −ve
1.34375 | 1.375 | 1.359375 | 1.149425287 | −ve
1.359375 | 1.375 | 1.3671875 | 0.571428571 | +ve
1.359375 | 1.3671875 | 1.36328125 | 0.286532951 | −ve
1.36328125 | 1.3671875 | 1.365234375 | 0.143061516 | +ve
1.36328125 | 1.36523438 | 1.364257813 | 0.071581961 | −ve
1.36328125 | 1.36425781 | 1.363769532 | 0.035803777 | −ve


1.3 Newton-Raphson Method:

The Newton-Raphson method, or Newton Method, is a powerful technique for solving equations numerically. Like so much of the differential calculus, it is based on the simple idea of linear approximation. The Newton Method, properly used, usually homes in on a root with devastating efficiency.

History:

The name "Newton's method" is derived from Isaac Newton's description of a special case of the method in De analysi per aequationes numero terminorum infinitas (written in 1669, published in 1711 by William Jones) and in De metodis fluxionum et serierum infinitarum (written in 1671, translated and published as Method of Fluxions in 1736 by John Colson). However, his method differs substantially from the modern method given above: Newton applies the method only to polynomials.

He does not compute the successive approximations x_n, but computes a sequence of polynomials, and only at the end arrives at an approximation for the root x. Finally, Newton views the method as purely algebraic and makes no mention of the connection with calculus. Newton may have derived his method from a similar but less precise method by Vieta. The essence of Vieta's method can be found in the work of the Persian mathematician Sharaf al-Din al-Tusi, while his successor Jamshīd al-Kāshī used a form of Newton's method to solve x^P - N = 0 to find roots of N (Ypma 1995).

A special case of Newton's method for calculating square roots was known much earlier and is often called the Babylonian method. Newton's method was used by 17th-century Japanese mathematician Seki Kōwa to solve single-variable equations, though the connection with calculus was missing. Newton's method was first published in 1685 in A Treatise of Algebra both Historical and Practical by John Wallis. In 1690, Joseph Raphson published a simplified description in Analysis aequationum universalis. Raphson again viewed Newton's method purely as an algebraic method and restricted its use to polynomials, but he describes the method in terms of the successive approximations x_n instead of the more complicated sequence of polynomials used by Newton. Finally, in 1740, Thomas Simpson described Newton's method as an iterative method for solving general nonlinear equations using calculus, essentially giving the description above. In the same publication, Simpson also gives the generalization to systems of two equations and notes that Newton's method can be used for solving optimization problems by setting the gradient to zero.

7

Page 8: Final Project (PRINT)

Arthur Cayley in 1879 in The Newton-Fourier imaginary problem was the first to notice the difficulties in generalizing Newton's method to complex roots of polynomials with degree greater than 2 and complex initial values.

Newton-Raphson Iteration:

x_(n+1) = x_n − f(x_n) / f'(x_n),  n = 0, 1, 2, . . .


Idea behind Newton’s method

Assume we need to find a root of the equation f(x) = 0. Consider the graph of the function f(x) and an initial estimate of the root, x0. To improve this estimate, take the tangent to the graph of f(x) through the point (x0, f(x0)) and let x1 be the point where this line crosses the horizontal axis.

Advantages and disadvantages of Newton’s method:

- The error decreases rapidly with each iteration
- Newton's method is very fast (compare with the bisection method!)
- Unfortunately, for bad choices of x0 (the initial guess) the method can fail to converge! Therefore the choice of x0 is VERY IMPORTANT!
- Each iteration of Newton's method requires two function evaluations, while the bisection method requires only one.
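The iteration can be sketched in Python (an illustration, not part of the original report; the test function is the beam problem of section 1.5, whose hand iterations start from the same x0 = 0.25):

```python
def newton(f, df, x0, tol=1e-10, max_iter=50):
    """Newton-Raphson: follow the tangent at each estimate down to the x-axis."""
    x = x0
    for _ in range(max_iter):
        x_new = x - f(x) / df(x)        # x_{n+1} = x_n - f(x_n)/f'(x_n)
        if abs(x_new - x) < tol:
            return x_new
        x = x_new
    raise RuntimeError("did not converge; try a different x0")

# Beam application of section 1.5: f(x) = -5x^4 + 6x^2 - 1, f'(x) = -20x^3 + 12x
root = newton(lambda x: -5*x**4 + 6*x**2 - 1,
              lambda x: -20*x**3 + 12*x, 0.25)
```

The result matches the report's Newton table, which converges to 0.447213595.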


1.4 Fixed-Point Method

Fixed point: A point, say s, is called a fixed point of a function g if it satisfies the equation s = g(s).

Fixed point Iteration: The transcendental equation f(x) = 0 can be converted algebraically into the form x = g(x) and then using the iterative scheme with the recursive relation

xi+1= g(xi), i = 0, 1, 2, . . .,

with some initial guess x0 is called the fixed point iterative scheme.

Algorithm - Fixed Point Iteration Scheme:

Given an equation f(x) = 0

Convert f(x) = 0 into the form x = g(x)

Let the initial guess be x0

Do

xi+1= g(xi)

while (none of the convergence criterion C1 or C2 is met)

C1. Fixing a priori the total number of iterations N.

C2. By testing the condition |x_(i+1) − x_i| (where i is the iteration number) less than some tolerance limit, say epsilon, fixed a priori.


The fixed-point iteration xn+1 = sin xn with initial value x0 = 2 converges to 0

Condition for Convergence:

If g(x) and g'(x) are continuous on an interval J about the root s of the equation x = g(x), and if |g'(x)| < 1 for all x in J, then the fixed-point iterative process x_(i+1) = g(x_i), i = 0, 1, 2, . . ., will converge to the root x = s for any initial approximation x0 in the interval J.
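A minimal Python sketch of the scheme (our illustrative contraction is g(x) = cos x, which satisfies |g'(x)| < 1 near its fixed point; it is an assumed example, not from the report):

```python
import math

def fixed_point(g, x0, tol=1e-10, max_iter=10000):
    """Fixed-point iteration: x_{i+1} = g(x_i) until successive values agree."""
    x = x0
    for _ in range(max_iter):           # criterion C1: cap on iterations
        x_new = g(x)
        if abs(x_new - x) < tol:        # criterion C2: |x_{i+1} - x_i| < tol
            return x_new
        x = x_new
    return x

# g(x) = cos x is a contraction near its fixed point, so the iteration converges
s = fixed_point(math.cos, 1.0)
```

At convergence, s satisfies s = cos s (s ≈ 0.739).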


1.5 Application on Bisection and Newton-Raphson Methods

A uniform beam of length 1 meter is simply supported from the right end and hinged from the other end. The beam is subjected to a linearly increasing distributed load from the left support. The deformation of the beam is given by:

y = 0.2(−x^5 + 2x^3 − x)

where x is measured along the beam from the right.


Determine the point of maximum deflection with relative error < 10^−3.

Solution:

The deformation of the beam is given by:

y = 0.2(−x^5 + 2x^3 − x)

(Figure: plot of the deflection y(x) for 0 ≤ x ≤ 1; y ranges from 0 down to about −0.06.)

For maximum deflection, dy/dx = 0:

0.2(−5x^4 + 6x^2 − 1) = 0

(−5x^4 + 6x^2 − 1) = 0

Let f(x) = −5x^4 + 6x^2 − 1


(Figure: plot of f(x) = −5x^4 + 6x^2 − 1 for −1.5 ≤ x ≤ 1.5.)

f(0) = −1 (−ve), f(0.5) = 0.1875 (+ve)

There exists one root in the interval [0, 0.5].

x_0 = (0 + 0.5)/2 = 0.25

f'(x) = −20x^3 + 12x

a) Newton-Raphson Method:

error = |(x_n − x_(n−1)) / x_n| × 100

No. | x_n | error %
x0 | 0.25 |
x1 | 0.489825581 | 48.96142433
x2 | 0.446807192 | −9.627953767
x3 | 0.447213596 | 0.090874758
x4 | 0.447213595 | −7.50207E-08
x5 | 0.447213595 | 2.48253E-14

b) Bisection Method:

x_n = (a + b)/2,  error = |(x_n − x_(n−1)) / x_n| × 100

a | b | x_n | error % | sign of f(x_n)
0 | 0.5 | 0.25 | 0 | −ve
0.25 | 0.5 | 0.375 | 33.33333333 | −ve
0.375 | 0.5 | 0.4375 | 14.28571429 | −ve
0.4375 | 0.5 | 0.46875 | 6.666666667 | −ve
0.46875 | 0.5 | 0.484375 | 3.225806452 | −ve
0.484375 | 0.5 | 0.4921875 | 1.587301587 | −ve
0.4921875 | 0.5 | 0.49609375 | 0.787401575 | −ve
0.49609375 | 0.5 | 0.498046875 | 0.392156863 | −ve
0.498046875 | 0.5 | 0.499023438 | 0.195694716 | −ve
0.499023438 | 0.5 | 0.499511719 | 0.097751761 | −ve
0.499511719 | 0.5 | 0.49975586 | 0.048851953 | −ve
0.49975586 | 0.5 | 0.49987793 | 0.024420062 | −ve
0.49987793 | 0.5 | 0.499938965 | 0.01220849 | −ve
0.499938965 | 0.5 | 0.499969483 | 0.006103873 | −ve
0.499969483 | 0.5 | 0.499984742 | 0.003051893 | −ve
0.499984742 | 0.5 | 0.499992371 | 0.001525923 | −ve

1.6 Lagrange & Newton Methods (Interpolation):

Introduction:

The idea and practice of interpolation has a long history going back to antiquity and extending to modern times. We will briefly sketch the early development of the subject in ancient times and the middle ages through the 17th century, culminating in the work of Newton. We next draw attention to a little-known paper of Waring predating Lagrange's interpolation formula by 16 years. The rest of the paper deals with a few selected contributions made after Lagrange till recent times. They include the computationally more attractive barycentric form of Lagrange's formula, the theory of error and convergence based on real-variable and complex-variable analyses, Hermite and Hermite-Fejér as well as nonpolynomial interpolation. Applications to numerical quadrature and the solution of ordinary and partial differential equations are briefly indicated. As seems appropriate for this auspicious occasion, however, we begin with Lagrange himself.

In numerical analysis, Lagrange polynomials are used for polynomial interpolation. For a given set of distinct points x_j and corresponding numbers y_j, the Lagrange polynomial is the polynomial of the least degree that at each point x_j assumes the corresponding value y_j (i.e. the functions coincide at each point). The interpolating polynomial of the least degree is unique, however, and it is therefore more appropriate to speak of "the Lagrange form" of that unique polynomial rather than "the Lagrange interpolation polynomial", since the same polynomial can be arrived at through multiple methods. Although named after Joseph Louis Lagrange, who published it in 1795, it was first discovered in 1779 by Edward Waring and it is also an easy consequence of a formula published in 1783 by Leonhard Euler.[1]

Lagrange interpolation is susceptible to Runge's phenomenon, and the fact that changing the interpolation points requires recalculating the entire interpolant can make Newton polynomials easier to use. Lagrange polynomials are used in the Newton–Cotes method of numerical integration and in Shamir's secret sharing scheme in cryptography.

The Lagrange form of the interpolation polynomial shows the linear character of polynomial interpolation and the uniqueness of the interpolation polynomial. Therefore, it is preferred in proofs and theoretical arguments. Uniqueness can also be seen from the invertibility of the Vandermonde matrix, due to the non-vanishing of the Vandermonde determinant.

But, as can be seen from the construction, each time a node xk changes, all Lagrange basis polynomials have to be recalculated. A better form of the interpolation polynomial for practical (or computational) purposes is the barycentric form of the Lagrange interpolation (see below) or Newton polynomials.

Lagrange and other interpolation at equally spaced points, as in the example above, yield a polynomial oscillating above and below the true function. This behaviour tends to grow with the number of points, leading to a divergence known as Runge's phenomenon; the problem may be eliminated by choosing interpolation points at Chebyshev nodes.[2]

The Lagrange basis polynomials can be used in numerical integration to derive the Newton–Cotes formulas.


Proof:

The function L(x) being sought is a polynomial in x of the least degree that interpolates the given data set; that is, it assumes the value y_j at the corresponding x_j for all data points j:

L(x) = Σ_(j=0..k) y_j l_j(x),  where  l_j(x) = Π_(0 ≤ m ≤ k, m ≠ j) (x − x_m) / (x_j − x_m)

Observe that:

1. In l_j(x) there are k factors in the product and each factor contains one x, so L(x) (which is a sum of these degree-k polynomials) must also be a polynomial of degree at most k.

2. Consider what happens when the product l_j(x_i) is expanded. Because the product skips m = j, if i = j then every factor has the form (x_j − x_m)/(x_j − x_m) = 1, so l_j(x_j) = 1. If i ≠ j, then one factor in the product is the one with m = i, i.e. (x_i − x_i)/(x_j − x_i) = 0, zeroing the entire product. So

l_j(x_i) = δ_ji,

where δ_ji is the Kronecker delta. Therefore

L(x_i) = Σ_(j=0..k) y_j l_j(x_i) = Σ_(j=0..k) y_j δ_ji = y_i.

Thus the function L(x) is a polynomial with degree at most k and L(x_i) = y_i.

Additionally, the interpolating polynomial is unique, as shown by the unisolvence theorem in the polynomial interpolation article.

Main Idea:

Solving an interpolation problem leads to a problem in linear algebra amounting to the inversion of a matrix. Using a standard monomial basis for our interpolation polynomial, L(x) = Σ_(j=0..k) m_j x^j, we must invert the Vandermonde matrix (x_i)^j to solve L(x_i) = y_i for the coefficients m_j of L(x). By choosing a better basis, the Lagrange basis, L(x) = Σ_(j=0..k) y_j l_j(x), we merely get the identity matrix, δ_ij, which is its own inverse: the Lagrange basis automatically inverts the analog of the Vandermonde matrix. This construction is analogous to the Chinese Remainder Theorem: instead of checking for remainders of integers modulo prime numbers, we are checking for remainders of polynomials when divided by linear factors.
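The Lagrange form can be evaluated directly from its definition; a minimal Python sketch (the sample data below, drawn from y = x^2 + 1, is an illustrative assumption, not from the report):

```python
def lagrange(xs, ys, x):
    """Evaluate the Lagrange form of the interpolating polynomial at x."""
    total = 0.0
    for j, (xj, yj) in enumerate(zip(xs, ys)):
        lj = 1.0                          # basis polynomial l_j(x)
        for m, xm in enumerate(xs):
            if m != j:
                lj *= (x - xm) / (xj - xm)
        total += yj * lj                  # L(x) = sum_j y_j l_j(x)
    return total

# Since l_j(x_i) = delta_ij, L reproduces the data exactly at the nodes.
xs, ys = [0, 1, 2, 3], [1, 2, 5, 10]      # samples of y = x^2 + 1
value = lagrange(xs, ys, 1.5)             # interpolated value between nodes
```

Because the data come from a quadratic, the interpolant reproduces x^2 + 1 exactly (value = 3.25 at x = 1.5).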

1.7 Application on Lagrange & Newton Methods:

To maximize a catch of bass in a lake, it is suggested to throw the line to the depth of the thermocline. The characteristic feature of this area is the sudden change in temperature. We are given the temperature vs. depth data for a lake in Table 1.

Temperature T (°C) | Depth z (m)
19.1 | 0
19.1 | −1
19 | −2
18.8 | −3
18.7 | −4
18.3 | −5
18.2 | −6
17.6 | −7
11.7 | −8
9.9 | −9
9.1 | −10


Figure 1: Temperature vs. depth of a lake.

Using the given data, we see the largest change in temperature is between z = −8 m and z = −7 m. Determine the value of the temperature at z = −7.5 m using Newton's divided difference method of interpolation and a first order polynomial.

Solution:

% Newton's divided difference method
clc
clear all
% x=[15 20 22.5 30];
% y=[362.78 517.35 602.97 901.97];
x=[-7 -8 -9 -10];
y=[17.6 11.7 9.9 9.1];
for j=1:length(x)-1
    f(j,1)=(y(j+1)-y(j))/(x(j+1)-x(j))
end
for k=1:length(x)-2
    f(k,2)=(f(k+1,1)-f(k,1))/(x(k+2)-x(k))
end
for i=1:length(x)-3
    f(i,3)=(f(i+1,2)-f(i,2))/(x(i+3)-x(i))
end

X=-7.5; % desired point to get the function value at
p1=y(1)+f(1,1)*(X-x(1))
p2=p1+f(1,2)*(X-x(1))*(X-x(2))
p3=p2+f(1,3)*(X-x(1))*(X-x(2))*(X-x(3))
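For readers without MATLAB, the same divided-difference computation can be sketched in Python (the helper name divided_difference_eval is ours, not the report's; p1 is the first-order estimate the problem asks for):

```python
def divided_difference_eval(x, y, X):
    """Build Newton's divided-difference table and evaluate the interpolant at X."""
    n = len(x)
    table = [list(y)]                    # order-0 differences are just the y values
    for order in range(1, n):
        prev = table[-1]
        table.append([(prev[i + 1] - prev[i]) / (x[i + order] - x[i])
                      for i in range(n - order)])
    p = y[0]
    factor = 1.0
    for order in range(1, n):
        factor *= (X - x[order - 1])     # (X - x0)(X - x1)...
        p += table[order][0] * factor    # add the next Newton term
    return p

x = [-7, -8, -9, -10]
y = [17.6, 11.7, 9.9, 9.1]
# first-order (linear) estimate at z = -7.5 m, as in the report:
p1 = y[0] + ((y[1] - y[0]) / (x[1] - x[0])) * (-7.5 - x[0])
```

The linear estimate gives T(−7.5) = 14.65 °C, and the full cubic interpolant reproduces the tabulated values at the nodes.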

1.8 Least Squares Fitting or Regression analysis:

Definition: "Least squares" means that the overall solution minimizes the sum of the squares of the errors made in the results of every single equation. Polynomial least squares describes the variance in a prediction of the dependent variable as a function of the independent variable and the deviations from the fitted curve.

Easily explained as a mathematical procedure for finding the best-fitting curve to a given set of points by minimizing the sum of the squares of the offsets ("the residuals") of the points from the curve. The sum of the squares of the offsets is used instead of the offset absolute values because this allows the residuals to be treated as a continuous differentiable quantity. However, because squares of the offsets are used, outlying points can have a disproportionate effect on the fit, a property which may or may not be desirable depending on the problem at hand.

The basic problem is to find the best fit straight line y = ax + b given that, for n ∈ {1, . . . , N}, the pairs (xn, yn) are observed. The method easily generalizes to finding the best fit of the form y = a1f1(x) + · · · + aKfK(x).

Description of the Problem

Often in the real world one expects to find linear relationships between variables. For example, the force of a spring linearly depends on the displacement of the spring: y = kx (here y is the force, x is the displacement of the spring from rest, and k is the spring constant). To test the proposed relationship, researchers go to the lab and measure what the force is for various displacements. Thus they assemble data of the form (xn, yn) for n ∈ {1, . . . , N}; here yn is the observed force in newtons when the spring is displaced xn meters.


Unfortunately, it is extremely unlikely that we will observe a perfect linear relationship. There are two reasons for this. The first is experimental error; the second is that the underlying relationship may not be exactly linear, but rather only approximately linear. See Figure 1 for a simulated data set of displacements and forces for a spring with spring constant equal to 5. The Method of Least Squares is a procedure, requiring just some calculus and linear algebra, to determine what the "best fit" line is to the data. Of course, we need to quantify what we mean by "best fit", which will require a brief review of some probability and statistics. A careful analysis of the proof will show that the method is capable of great generalizations. Instead of finding the best fit line, we could find the best fit given by any finite linear combination of specified functions. Thus the general problem is: given functions f1, . . . , fK, find values of coefficients a1, . . . , aK such that the linear combination

y = a1f1(x) + · · · + aKfK(x)    (1.1)

is the best approximation to the data.

Linear curve fitting (linear regression):

Given the general form of a straight line, y = ax + b:

How can we pick the coefficients that best fit the line to the data?

First question: What makes a particular straight line a ‘good’ fit?


Why does the blue line appear to us to fit the trend better?

• Consider the distance between the data and points on the line

• Add up the length of all the red and blue vertical lines

• This is an expression of the ‘error’ between data and fitted line

• The one line that provides a minimum error is then the ‘best’ straight line

Quantifying error in a curve fit — assumptions:

1) Positive or negative error has the same value (data point is above or below the line).
2) Greater errors are weighted more heavily.

We can do both of these things by squaring the distance.


Denote the data values as (x, y) and points on the fitted line as (x, f(x)), and sum the squared error at the data points:

err = Σ (y_i − f(x_i))^2

Our fit is a straight line, so now substitute f(x) = ax + b.

The 'best' line has minimum error between line and data points. This is called the least squares approach, since we minimize the square of the error.

Take the derivative of the error with respect to a and b, and set each to zero.

Solve for the a and b so that the previous two equations both equal 0, and re-write these two equations.

Put these into matrix form
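The resulting 2×2 normal equations can be solved directly; a minimal Python sketch (the exact data y = 5x below is an illustrative assumption, echoing the spring-constant example above):

```python
def linear_least_squares(xs, ys):
    """Solve the 2x2 normal equations for the best-fit line y = a*x + b."""
    n = len(xs)
    Sx = sum(xs)
    Sy = sum(ys)
    Sxx = sum(x * x for x in xs)
    Sxy = sum(x * y for x, y in zip(xs, ys))
    # Normal equations in matrix form:
    # [ Sxx  Sx ] [a]   [Sxy]
    # [ Sx   n  ] [b] = [Sy ]
    det = Sxx * n - Sx * Sx
    a = (Sxy * n - Sx * Sy) / det
    b = (Sxx * Sy - Sx * Sxy) / det
    return a, b

# Noise-free spring data y = 5x should recover a = 5, b = 0 exactly
a, b = linear_least_squares([0, 1, 2, 3], [0, 5, 10, 15])
```

With noisy measurements, the same two equations give the line minimizing the summed squared error.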


2.1 Application on Least Squares Fitting

Time (hr) | 1 | 2 | 3 | 4 | 5 | 6 | 8 | 10 | 12 | 18 | 24
S (m) | 0 | 0.16 | 0.59 | 0.77 | 0.98 | 1.1 | 1.24 | 1.44 | 1.56 | 2.04 | 2.15


(Figure: linear model fitted to the S vs. time data.)

Equations:

A0: Equation 1
A1: Equation 2


2.2 Integration: Simpson & Trapezoidal Methods

Area under a given curve = ∫_a^b f(x) dx, where f(x) is the equation of the curve, and a and b are the limits of the curve.

Composite Simpson's rule:

∫_a^b f(x) dx ≈ (h/3) [ f(x_0) + 4( f(x_1) + f(x_3) + . . . + f(x_(n−1)) ) + 2( f(x_2) + f(x_4) + . . . + f(x_(n−2)) ) + f(x_n) ]

where h = (b − a)/n.

The method is credited to the mathematician Thomas Simpson (1710–1761) of Leicestershire, England. If the interval of integration [a, b] is in some sense "small", then Simpson's rule will provide an adequate approximation to the exact integral.

Simpson's rule may give very poor results if the function being integrated is not smooth over the interval; typically, this means that either the function is highly oscillatory, or it lacks derivatives at certain points. One common way of handling this problem is by breaking up the interval [a, b] into a number of small subintervals. Simpson's rule is then applied to each subinterval, with the results being summed to produce an approximation for the integral over the entire interval. This sort of approach is termed the composite Simpson's rule.

Error in Simpson's rule:

The error in approximating an integral by Simpson's rule is

−(h^5 / 90) f''''(ξ),  where h = (b − a)/2

and ξ is some number between a and b.

The error is asymptotically proportional to (b − a)^5. However, the above derivations suggest an error proportional to (b − a)^4. Simpson's rule gains an extra order because the points at which the integrand is evaluated are distributed symmetrically in the interval [a, b]. Since the error term is proportional to the fourth derivative of f(x) at ξ, this shows that Simpson's rule provides exact results for any polynomial f of degree three or less, since the fourth derivative of such a polynomial is zero at all points.

Composite trapezoidal rule:

The trapezoidal rule (also known as the trapezoid rule or trapezium rule) is a technique for approximating the definite integral. The trapezoidal rule works by approximating the region under the graph of the function f(x) as a trapezoid and calculating its area.

∫_a^b f(x) dx ≈ (h/2) [ f(x_0) + 2 f(x_1) + 2 f(x_2) + . . . + 2 f(x_(n−1)) + f(x_n) ]

where h = (b − a)/n.

Error in the trapezoidal rule:

The error of the composite trapezoidal rule is the difference between the value of the integral and the numerical result. There exists a number ξ between a and b such that

E = −((b − a) h^2 / 12) f''(ξ)

It follows that if the integrand is concave up (and thus has a positive second derivative), then the error is negative and the trapezoidal rule overestimates the true value. This can also be seen from the geometric picture: the trapezoids include all of the area under the curve and extend over it. Similarly, a concave-down function yields an underestimate because area is unaccounted for under the curve, but none is counted above. If the interval of the integral being approximated includes an inflection point, the error is harder to identify.

Comparison

In general, Simpson's rule has faster convergence than the trapezoidal rule for functions which are twice continuously differentiable, though not in all specific cases. However, for various classes of rougher functions (ones with weaker smoothness conditions), the trapezoidal rule has faster convergence in general than Simpson's rule. When using Simpson's rule, the number of intervals (n) has to be even. Simpson's rule is the better choice if the shape of the function is curved rather than made of broken lines.
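Both composite rules can be sketched in Python (an illustration, not part of the original report; the test integrand x^3 is chosen because Simpson's rule is exact for cubics, while the trapezoidal rule overestimates this convex integrand, as noted above):

```python
def trapezoid(f, a, b, n):
    """Composite trapezoidal rule with n subintervals."""
    h = (b - a) / n
    s = f(a) + f(b) + 2 * sum(f(a + i * h) for i in range(1, n))
    return h * s / 2

def simpson(f, a, b, n):
    """Composite Simpson's rule; n must be even."""
    if n % 2:
        raise ValueError("Simpson's rule needs an even number of intervals")
    h = (b - a) / n
    s = f(a) + f(b)
    s += 4 * sum(f(a + i * h) for i in range(1, n, 2))   # odd-indexed nodes
    s += 2 * sum(f(a + i * h) for i in range(2, n, 2))   # even-indexed nodes
    return h * s / 3

# integral of x^3 over [0, 2] is exactly 4
t = trapezoid(lambda x: x**3, 0, 2, 4)   # overestimates (concave-up integrand)
s = simpson(lambda x: x**3, 0, 2, 4)     # exact for polynomials of degree <= 3
```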

2.3 Application using Simpson’s method:

A civil engineer wants to cover the horizontal surface of a big swimming pool with certain ceramic tiles. The pool was surveyed relative to the whole land as shown in the figure. Knowing that each ceramic tile is 120 cm × 120 cm, and each carton consists of 6 ceramic tiles, determine the number of cartons the engineer should buy to cover such a surface, taking 10% extra for safety. Simpson's method was used as the shape of the pool is curved, so Simpson's rule will give more accurate results.

A1 = (12.8/3) {43.77 + 34.54 + 2(29.81 + 31.97 + 34.26 + 23.35) + 4(32.22 + 29.54 + 35.43 + 28.68 + 22.76)} = 3889.536 m²

A2 = (16/3) {36.23 + 45.46 + 2(20.48 + 18.55 + 18.95) + 4(20.6 + 20.71 + 21.42 + 22.54)} = 2877.067 m²

A3 = 128 × 80 = 10240 m²

AT = A3 − A2 − A1 = 3473.397 m²

Total tiles = 3473.397 / (1.2 × 1.2) = 2412.08


Total cartons = 2412.08 / 6 = 402.01

Total safe cartons = 402.01 × 1.1 = 442.21 ≈ 443 cartons

2.4 Application using both Simpson and Trapezoidal methods:

An engineering company wants to buy a piece of land for its headquarters office with an available budget of 150000 LE (Egyptian Pounds). Two adjacent lands are available as seen in the figure, separated by a fence: ABEFA (A1) at a price of 3 LE/m², while the other, BCDEB (A2), at a price of 2.5 LE/m². State which land the company could buy, and verify your answer using calculations.

Solution

s_EBC = (315 + 413.5 + 321.75)/2 = 525.125

A_EBC = 50049.498 m²; similarly, A_ABE = 34869.45 m²

A_EDC = (45/3) (27.2 + 50.6 + 2(42.9 + 47.7) + 4(46.5 + 51.1 + 48.8)) + 0.5 × 50.6 × 45 = 13807.5 m²


A_AFE = (35/2) × (33.3 + 35.6 + 2(65.3 + 41.1 + 71.4 + 50.6 + 59.6 + 35.2)) = 12517.75 m²

A_BCDEB = A_EBC + A_EDC = 63857 m²

Price of BCDEB = 63857 × 2.5 = 159642.5 LE

A_ABEFA = A_ABE + A_AFE = 47387.2 m²

Price of ABEFA = 47387.2 × 3 = 142161.6 LE

Price of BCDEB (159642.5 LE) > 150000 LE > price of ABEFA (142161.6 LE), so the company could buy land ABEFA.

2.5 Romberg's method:

In numerical analysis, Romberg's method (Romberg 1955) is used to estimate the definite integral

by applying Richardson extrapolation (Richardson 1911) repeatedly on the trapezium rule or the rectangle rule (midpoint rule). The estimates generate a triangular array. Romberg's method is a Newton–Cotes formula – it evaluates the integrand at equally spaced points. The integrand must have continuous derivatives, though fairly good results may be obtained if only a few derivatives exist. The method is named after Werner Romberg (1909–2003), who published the method in 1955.

Using h_n = (b − a) / 2^n, the method can be inductively defined by

R(0,0) = h_1 (f(a) + f(b))
R(n,0) = (1/2) R(n−1,0) + h_n Σ_(k=1..2^(n−1)) f(a + (2k − 1) h_n)
R(n,m) = R(n,m−1) + (R(n,m−1) − R(n−1,m−1)) / (4^m − 1)

2.6 Application on Romberg's method:

x      0     16    32    48    64    80    96    112   128
F(x)   36.2  20.6  20.84 20.71 18.55 21.41 18.95 22.45 45.46


H1 = 128, H2 = 64, H3 = 32, H4 = 16

I1 = (128/2) × (36.2 + 45.46) = 5226.24

I2 = (64/2) × (36.2 + 2×18.55 + 45.46) = 3800.32

I3 = (32/2) × (36.2 + 2×20.84 + 2×18.55 + 2×18.95 + 45.46) = 3173.44

I4 = (16/2) × (36.2 + 2(18.55 + 20.84 + 18.95 + 20.6 + 22.54 + 20.71) + 45.46) = 2608.32

1st iteration = (4I2 − I1)/3 = (4 × 3800.32 − 5226.24)/3 = 3325.0133

2nd iteration = (16I2 − I1)/15 = (16 × 2964.48 − 3325.0133)/15 = 2940.444

3rd iteration = (64I2 − I1)/63 = (64 × 2383.644 − 2940.444)/63 = 2374.806

(In each iteration, I2 and I1 denote the finer and coarser estimates of the previous column.)

Number of intervals   Step size (H)   Trapezoidal estimate   1st iteration   2nd iteration   3rd iteration
1                     128             5226.24                3325.0133       2940.444        2374.806
2                     64              3800.32                2964.48         2383.644
3                     32              3173.44                2419.9466
4                     16              2608.32
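The tableau can be recomputed directly from the nine tabulated ordinates as a sketch (helper names are illustrative). One caveat: the report's I4 appears to omit the F(80) = 21.41 ordinate and reads F(112) as 22.54 rather than 22.45, so the h = 16 row and the entries depending on it differ slightly from what the tabulated data gives:

```python
# Sketch of the Romberg tableau built from the tabulated F(x) values (h = 16).
X_STEP = 16
f = [36.2, 20.6, 20.84, 20.71, 18.55, 21.41, 18.95, 22.45, 45.46]

def trapezoid(samples, stride):
    """Trapezoidal estimate using every `stride`-th sample."""
    ys = samples[::stride]
    h = X_STEP * stride
    return (h / 2) * (ys[0] + ys[-1] + 2 * sum(ys[1:-1]))

# First column: trapezoidal estimates with h = 128, 64, 32, 16.
R = [[trapezoid(f, stride)] for stride in (8, 4, 2, 1)]

# Richardson extrapolation: R[n][m] = (4^m R[n][m-1] - R[n-1][m-1]) / (4^m - 1)
for m in range(1, 4):
    for n in range(m, 4):
        R[n].append((4**m * R[n][m - 1] - R[n - 1][m - 1]) / (4**m - 1))

print([round(row[0], 2) for row in R])   # trapezoidal column
print(round(R[2][2], 3))                 # second-iteration value, matches 2940.444
```

The first three trapezoidal estimates and the extrapolated values built from them reproduce the table above exactly.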


2.7 Euler and Runge-Kutta methods:

Biography:

Leonhard Euler Mathematician (1707–1783)

Leonhard Euler was an 18th-century physicist and scholar who was responsible for developing many concepts that are an integral part of modern mathematics.

Synopsis
Born on April 15, 1707, in Basel, Switzerland, Leonhard Euler was one of math's most pioneering thinkers, establishing a career as an academy scholar and contributing greatly to the fields of geometry, trigonometry and calculus, among many others. He released hundreds of articles and publications during his lifetime, and continued to publish after losing his sight. He died on September 18, 1783.

Early Life and Education
Leonhard Euler was born on April 15, 1707, in Basel, Switzerland. Though originally slated for a career as a rural clergyman, Euler showed an early aptitude and propensity for mathematics, and thus, after studying with Johann Bernoulli, he attended the University of Basel and earned his master's degree during his teens. Moving to Russia in 1727, Euler served in the navy before joining the St. Petersburg Academy as a professor of physics and later heading its mathematics division.

He wed Katharina Gsell in early 1734, with the couple going on to have many children, though only five lived past their father. The couple was married for 39 years until Katharina's death, and Euler remarried in his later years to her half-sister.


In 1736, he published the first of his many books, Mechanica. By the end of the decade, having suffered fevers and overexertion due to cartography work, Euler was severely hampered in the ability to see from his right eye.

Definitions of complex exponentiation

The exponential function eˣ for real values of x may be defined in a few different equivalent ways (see Characterizations of the exponential function). Several of these methods may be directly extended to give definitions of eᶻ for complex values of z simply by substituting z in place of x and using complex algebraic operations. In particular, we may use either of the two following definitions, which are equivalent. From a more advanced perspective, each of these definitions may be interpreted as giving the unique analytic continuation of eˣ to the complex plane.

Power series definition. For complex z,

eᶻ = 1 + z + z²/2! + z³/3! + … = Σ zⁿ/n!, n = 0, 1, 2, …

Using the ratio test it is possible to show that this power series has an infinite radius of convergence, and so defines eᶻ for all complex z.

Limit definition. For complex z,

eᶻ = lim (1 + z/n)ⁿ as n → ∞.
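As a quick numeric check (an illustrative sketch, not from the report), both definitions can be compared against Python's built-in complex exponential for a sample point z = 1 + 2i:

```python
# Verify that the power-series and limit definitions of e^z agree with cmath.exp.
import cmath

z = 1 + 2j

# Power series: e^z = sum_{n>=0} z^n / n!  (60 terms is ample for |z| ~ 2.24)
term, series = 1, 0
for n in range(1, 60):
    series += term          # add z^(n-1) / (n-1)!
    term *= z / n           # next term: z^n / n!

# Limit definition: e^z = lim (1 + z/n)^n, evaluated at a large fixed n
big_n = 10**6
limit = (1 + z / big_n) ** big_n

print(series, limit, cmath.exp(z))
```

The power series converges to machine precision, while the limit form with n = 10⁶ agrees to roughly five decimal places, illustrating its much slower convergence.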

Runge–Kutta methods
In numerical analysis, the Runge–Kutta methods are a family of implicit and explicit iterative methods used in temporal discretization for the approximate solutions of ordinary differential equations. These methods were developed around 1901 by the German mathematicians Carl Runge (1856-1927) and Martin Kutta (1867-1944); see the article on numerical methods for ordinary differential equations for more background and other methods. Carl Runge developed numerical methods for solving the differential equations that arose in his study of atomic spectra, and these methods are still used today. He used so much mathematics in his research that physicists thought he was a mathematician, and he did so much physics that mathematicians thought he was a physicist; today his name is associated with the Runge–Kutta methods for numerically solving differential equations. Kutta, another German applied mathematician, is also remembered for his contribution to the differential-equations-based Kutta–Joukowski theory of airfoil lift in aerodynamics.


The Runge–Kutta method

The most widely known member of the Runge–Kutta family is generally referred to as "RK4", the "classical Runge–Kutta method", or simply "the Runge–Kutta method". Let an initial value problem be specified as follows:

dy/dt = f(t, y), y(t₀) = y₀.

Here, y is an unknown function (scalar or vector) of time t which we would like to approximate; we are told that dy/dt, the rate at which y changes, is a function of t and of y itself. At the initial time t₀ the corresponding y-value is y₀. The function f and the data t₀, y₀ are given. Now pick a step size h > 0 and define, for n = 0, 1, 2, 3, …,

y(n+1) = y(n) + (h/6)(k₁ + 2k₂ + 2k₃ + k₄),
t(n+1) = t(n) + h,

using

k₁ = f(t(n), y(n)),
k₂ = f(t(n) + h/2, y(n) + h·k₁/2),
k₃ = f(t(n) + h/2, y(n) + h·k₂/2),
k₄ = f(t(n) + h, y(n) + h·k₃).

(Note: the above equations have different but equivalent definitions in different texts.)

Here y(n+1) is the RK4 approximation of y(t(n+1)), and the next value is determined by the present value y(n) plus the weighted average of four increments, where each increment is the product of the size of the interval, h, and an estimated slope specified by the function f on the right-hand side of the differential equation:

k₁ is the increment based on the slope at the beginning of the interval, using y(n) (Euler's method);

k₂ is the increment based on the slope at the midpoint of the interval, using y(n) + h·k₁/2;

k₃ is again the increment based on the slope at the midpoint, but now using y(n) + h·k₂/2;

k₄ is the increment based on the slope at the end of the interval, using y(n) + h·k₃.


In averaging the four increments, greater weight is given to the increments at the midpoint. If f is independent of y, so that the differential equation is equivalent to a simple integral, then RK4 reduces to Simpson's rule. The RK4 method is a fourth-order method, meaning that the local truncation error is on the order of O(h⁵), while the total accumulated error is on the order of O(h⁴).
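The update rule above can be sketched directly in code. The test problem y′ = y, y(0) = 1 is an illustrative assumption (not from the report), chosen because its exact solution at t = 1 is e, making the accuracy easy to check:

```python
# Minimal RK4 sketch following the classical update rule.
import math

def rk4_step(f, t, y, h):
    """One RK4 step: weighted average of four slope estimates."""
    k1 = f(t, y)
    k2 = f(t + h / 2, y + h * k1 / 2)
    k3 = f(t + h / 2, y + h * k2 / 2)
    k4 = f(t + h, y + h * k3)
    return y + (h / 6) * (k1 + 2 * k2 + 2 * k3 + k4)

def rk4(f, t0, y0, h, n):
    """Integrate n steps of size h from (t0, y0)."""
    t, y = t0, y0
    for _ in range(n):
        y = rk4_step(f, t, y, h)
        t += h
    return y

# y' = y, y(0) = 1  =>  exact y(1) = e
approx = rk4(lambda t, y: y, 0.0, 1.0, 0.1, 10)
print(approx, math.e)
```

With h = 0.1 the answer already agrees with e to about six decimal places, consistent with the O(h⁴) global error.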

2.8 Application on Euler and Runge-Kutta methods:

Use Euler's method and the Runge-Kutta method of order 2 to solve the following problem.

Water in a cylindrical tank is draining through a hole in the tank; the rate at which the water level drops is

dy/dt = −k·y

where k is a constant depending on the shape of the hole and the cross-sectional areas of the tank and the drain hole. Take k = 0.06 per minute and an initial level of 7 m, and use a step of 1.5 minutes.
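No worked solution survives in this transcript (the following pages held only figures), so here is a minimal sketch under stated assumptions: the report's "dy/dt = −k∗Y, K = −0.06, M = 7" is read as exponential decay at rate 0.06 per minute from an initial level of 7 m, with h = 1.5 min and four steps; the `euler` and `rk2` helper names are illustrative:

```python
# Hedged sketch: Euler vs. 2nd-order Runge-Kutta (Heun) for the draining tank.
# Assumed readings of the problem data: dy/dt = -k*y, k = 0.06 per minute,
# y(0) = 7 m, step h = 1.5 min, four steps (6 minutes total).
import math

K, Y0, H, STEPS = 0.06, 7.0, 1.5, 4

def f(t, y):
    return -K * y           # water level falls at a rate proportional to y

def euler(f, t0, y0, h, n):
    t, y = t0, y0
    for _ in range(n):
        y += h * f(t, y)
        t += h
    return y

def rk2(f, t0, y0, h, n):   # Heun's method, one common 2nd-order Runge-Kutta
    t, y = t0, y0
    for _ in range(n):
        k1 = f(t, y)
        k2 = f(t + h, y + h * k1)
        y += (h / 2) * (k1 + k2)
        t += h
    return y

exact = Y0 * math.exp(-K * STEPS * H)   # analytical solution y0 * e^(-kt)
print(euler(f, 0, Y0, H, STEPS), rk2(f, 0, Y0, H, STEPS), exact)
```

Even at this fairly coarse step, the second-order method lands within about 0.003 m of the exact exponential decay, while Euler is off by roughly 0.08 m.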


2.9 Application #2 on Euler and Runge-Kutta methods:

A submarine at sea-bed level starts to float to the surface, with initial condition x = 0, y = 1 and rate of rise governed by

f(x, y) = y + x

The submarine rises 1.5 m each minute.

Use Euler's method and the Runge-Kutta method of order 2 to solve this application.
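As with the previous application, the worked solution is not reproduced in this transcript, so here is a sketch under stated assumptions: the IVP is read as y′ = x + y, y(0) = 1, integrated for two steps of h = 1.5 (the step size is an assumption drawn from the "1.5 m each minute" figure); the exact solution y = 2eˣ − x − 1 is included for comparison:

```python
# Hedged sketch for application #2: Euler vs. Heun (RK2) on y' = x + y, y(0) = 1.
import math

def f(x, y):
    return x + y

def euler(f, x0, y0, h, n):
    x, y = x0, y0
    for _ in range(n):
        y += h * f(x, y)
        x += h
    return y

def rk2(f, x0, y0, h, n):   # Heun's method (predict with Euler, then average slopes)
    x, y = x0, y0
    for _ in range(n):
        k1 = f(x, y)
        k2 = f(x + h, y + h * k1)
        y += (h / 2) * (k1 + k2)
        x += h
    return y

h, n = 1.5, 2
exact = 2 * math.exp(h * n) - h * n - 1     # y = 2e^x - x - 1 at x = 3
print(euler(f, 0, 1, h, n), rk2(f, 0, 1, h, n), exact)
```

Both methods undershoot the exact value badly at this coarse step (Euler gives 8.5 and RK2 gives 22.28 against an exact 36.17), which illustrates the point of the comparison: as h shrinks, Euler's global error falls like O(h) but RK2's like O(h²).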


Conclusion

Both the analytical and numerical methods yielded the same results, which shows that the numerical methods are effective and can be applied in many different ways. We have presented several applications and examples for each method to clarify the differences between the methods and their advantages and disadvantages.

References:

- Main textbook: Numerical Methods for Engineers, by Steven C. Chapra and Raymond P. Canale, McGraw-Hill.

- www.wikipedia.org
- www.math.uiowa.edu
- www.mat.iitm.ac.in
- www.math.ust.hk
- www.math.ubc.ca
