# Numerical Integration: Summary and Conclusions

In this lab we explored several methods for computing definite integrals numerically: Riemann sums (left rectangle, midpoint, and right rectangle), the Trapezoidal rule, and Simpson's Rule.

Our tests showed that Simpson's Rule was very accurate, and in many cases exact for cubic functions. Exploring this further, we saw that Simpson's Rule and the actual integration yielded the same result for cubics, which confirmed our conjectures about the method. Finally, we showed where Simpson's Rule comes from.

With all of that done, there are many other things that could be explored. One direction is testing more functions than cubics (trigonometric, logarithmic, exponential, etc.) and seeing which methods work best for those. Another route would be to explore higher dimensions. Further questions include: how do we test accuracy for functions that don't have an elementary antiderivative? Which methods would work best there? How small an interval would be needed?

# Equation for Simpson's Rule: Proof and Derivation

The book gives us an equation for the parabola through any three points:

$q(x)=A\frac {(x-b)(x-c)}{(a-b)(a-c)}+ B \frac{(x-a)(x-c)}{(b-a)(b-c)} +C\frac{(x-a)(x-b)}{(c-a)(c-b)}$

This is the formula for a parabola $q(x)= mx^2+nx+p$ through the points $(a, A)$, $(b, B)$, $(c, C)$; we will take this as a given and go from there.

First, we work on an interval $[-h, h]$. Our three points will then be $(-h, f(-h))$, $(0, f(0))$, and $(h, f(h))$.

Next, we write the equation for this particular quadratic by substituting our values into the given formula:
$q(x)=f(-h)\frac {(x)(x-h)}{(-h)(-2h)}+ f(0) \frac{(x+h)(x-h)}{(h)(-h)} +f(h)\frac{(x+h)(x)}{(2h)(h)}$

Reducing this we get:

$q(x)=f(-h)\frac {x^2-xh}{2h^2}+ f(0) \frac{(x^2-h^2)}{-h^2} +f(h)\frac{x^2+xh}{2h^2}$

Now we integrate from $-h$ to $h$.

$\int^h_{-h} f(-h)\frac {x^2-xh}{2h^2}\,dx+ \int^h_{-h} f(0) \frac{x^2-h^2}{-h^2}\,dx +\int^h_{-h} f(h)\frac{x^2+xh}{2h^2}\,dx$

$=f(-h) \left(\frac{x^3}{6h^2} - \frac{x^2}{4h}\right) \Big|_{-h}^h + f(0) \left(-\frac{x^3}{3h^2} + x\right) \Big|_{-h}^h + f(h) \left(\frac{x^3}{6h^2} + \frac{x^2}{4h}\right) \Big|_{-h}^h$

Just taking the first integral:

$f(-h)\left(\left(\frac{h^3}{6h^2} - \frac{h^2}{4h}\right)-\left(\frac{-h^3}{6h^2} - \frac{h^2}{4h}\right)\right)$

$=f(-h)\frac{h}{3}$

Evaluating the other integrals and reducing, we get

$f(0)\frac{4}{3}h$ and $f(h)\frac{h}{3}$

So we have: $f(-h)\frac{h}{3} + f(0)\frac{4}{3}h +f(h)\frac{h}{3}$
$=\frac{h}{3}[f(-h)+4f(0)+f(h)]$

Now, if we change the points we were working with to

$(x_{i-1},f(x_{i-1})),(x_i,f(x_{i})), (x_{i+1},f(x_{i+1}))$

we get

$\frac{h}{3}[f(x_{i-1})+4f(x_{i})+f(x_{i+1})]$

where $h$ is the common spacing between consecutive points.

This is the formula for Simpson's Rule, and it has major implications: it shows why the method can be so accurate. Given three points that a function passes through, Simpson's Rule fits a quadratic through them and integrates that quadratic exactly over the region. We also see that the formula does not depend on the actual equation of the function, only on the values at the points being evaluated and the interval they span.
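The derivation above translates directly into code: apply the three-point formula to each pair of adjacent subintervals and add the pieces up. Here is a minimal sketch in Python (the function and variable names are ours, not from our lab code):

```python
def simpsons_rule(f, a, b, n):
    """Approximate the integral of f over [a, b] using n subintervals (n must be even)."""
    if n % 2 != 0:
        raise ValueError("n must be even")
    h = (b - a) / n
    total = 0.0
    # Each odd-indexed point x_i is the middle of a pair of subintervals,
    # contributing h/3 * [f(x_{i-1}) + 4 f(x_i) + f(x_{i+1})].
    for i in range(1, n, 2):
        x = a + i * h
        total += (h / 3) * (f(x - h) + 4 * f(x) + f(x + h))
    return total

# Example: x^3 over [0, 3]; the exact integral is 81/4 = 20.25.
approx = simpsons_rule(lambda x: x**3, 0, 3, 100)
```

Because each piece integrates a fitted quadratic exactly, this just repeats the $\frac{h}{3}[f(x_{i-1})+4f(x_{i})+f(x_{i+1})]$ formula on every pair of subintervals.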

# Numerical Integration- Simpson’s Rule

If you've kept up with recent blog posts, you'll have noticed that in our data one method of integration seemed to work without any error: Simpson's Rule. This sparked our interest: why does this method work so well? In particular, we noticed that it was almost flawless with cubic functions. Why is this? From the text, we were given the idea to algebraically compute the Simpson's rule approximation and compare it to the actual integral in the special case where the interval is $[-h,h]$ and $n$, our number of rectangles, is 2, for $f(x)=ax^{3}+bx^{2}+cx+d$.

We began by computing the actual integration:

$\int\limits_{-h}^{h} (ax^{3}+bx^{2}+cx+d)\, dx = \left(\frac{ax^{4}}{4} + \frac{bx^{3}}{3} + \frac{cx^{2}}{2} + dx\right)\Big|^{h}_{-h}$

$=\left[\frac{ah^{4}}{4} + \frac{bh^{3}}{3} + \frac{ch^{2}}{2} + dh\right]-\left[\frac{a(-h)^{4}}{4} + \frac{b(-h)^{3}}{3} + \frac{c(-h)^{2}}{2} + d(-h)\right]$

$= \frac{2bh^{3}}{3} + 2dh$

The equation for Simpson's Rule is $\frac{h}{3}[f(x_{i-1})+4f(x_{i})+f(x_{i+1})]$

Using two rectangles, we get: $\frac{h}{3}[f(-h)+4f(0)+f(h)]$ $\implies$ $\frac{h}{3}[(a(-h)^{3}+b(-h)^{2}+c(-h)+d)+4d+(a(h)^{3}+b(h)^{2}+c(h)+d)]$

$= \frac{h}{3}(2bh^{2}+6d)= \frac{2bh^{3}}{3} + 2dh$
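The algebra above can be double-checked numerically. A small sketch, with arbitrary coefficients chosen purely for illustration:

```python
# Check that the two-rectangle Simpson's value h/3 * [f(-h) + 4 f(0) + f(h)]
# matches the exact integral 2bh^3/3 + 2dh for a cubic. The coefficient and
# h values below are arbitrary choices, not from our data.
a, b, c, d = 5.0, -6.0, 0.3, 1.2
h = 2.0

f = lambda x: a * x**3 + b * x**2 + c * x + d

simpson = (h / 3) * (f(-h) + 4 * f(0) + f(h))
exact = 2 * b * h**3 / 3 + 2 * d * h

difference = abs(simpson - exact)  # expect ~0 (floating-point noise only)
```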

And thus we find that for cubic functions, Simpson's rule is not merely accurate but exact: the approximation equals the true integral.

For further research we can look at what happens when we change the bounds to $[a,b]$. We can also calculate the actual integral and the Simpson's rule approximation for different types of functions, to see if this method really holds up as the most accurate method for numerical integration.

# Numerical Integration Analysis — Data!

We wanted to follow up with a post that contains a dump of our data. This includes, essentially, our percent error from the expected value of the given test functions:

• $\cos{x}$ over [0, $\pi{}$]
• $2x + 1$ over [0, 1]
• $4-x^2$ over [0, 2]
• $5x^3 - 6x^2 + 0.3x$ over [-1, 3]
• $x^3$ over [-1, 3]
• $x^3 -27x^2 + 8x$ over [0, 3]

We tested 5 different deltas (rectangle widths) $dx$: $0.1$, $0.01$, $0.001$, $0.0001$, and $0.00001$. We are not going to include tables for every method and every delta; it's just too much. However, we will show the first delta ($0.1$) and the last delta ($0.00001$).
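For clarity, the percent errors in the tables below are signed deviations from the exact integral. A sketch of how such a figure would be computed (the helper name and example values are ours, not from our lab code):

```python
def percent_error(approx, exact):
    """Signed percent error of an approximation relative to the exact value."""
    return (approx - exact) / exact * 100.0

# Hypothetical example: an estimate of 1.95 against an exact value of 2.0
err = percent_error(1.95, 2.0)  # ≈ -2.5
```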

### Summary of Methods for $\cos{x}$ over [0, $\pi{}$]

| Method | Delta | Percent Error |
|---|---|---|
| Trapezoidal | $0.100000$ | -0.33364 |
| Trapezoidal | $0.000010$ | -0.00000 |
| Midpoint | $0.100000$ | -0.20893 |
| Midpoint | $0.000010$ | -0.00000 |
| Simpsons | $0.100000$ | 0.05475 |
| Simpsons | $0.000010$ | 0.00000 |
| Left Rectangle | $0.100000$ | -4.97995 |
| Left Rectangle | $0.000010$ | -0.00050 |
| Right Rectangle | $0.100000$ | 4.31267 |
| Right Rectangle | $0.000010$ | 0.00050 |

### Summary of Methods for $2x + 1$ over [0, 1]

| Method | Delta | Percent Error |
|---|---|---|
| Trapezoidal | $0.100000$ | -14.50000 |
| Trapezoidal | $0.000010$ | -0.00150 |
| Midpoint | $0.100000$ | -14.50000 |
| Midpoint | $0.000010$ | -0.00150 |
| Simpsons | $0.100000$ | 0.00000 |
| Simpsons | $0.000010$ | 0.00000 |
| Left Rectangle | $0.100000$ | -10.00000 |
| Left Rectangle | $0.000010$ | -0.00100 |
| Right Rectangle | $0.100000$ | -19.00000 |
| Right Rectangle | $0.000010$ | -0.00200 |

### Summary of Methods for $4-x^2$ over [0, 2]

| Method | Delta | Percent Error |
|---|---|---|
| Trapezoidal | $0.100000$ | -0.42813 |
| Trapezoidal | $0.000010$ | -0.00000 |
| Midpoint | $0.100000$ | -0.33906 |
| Midpoint | $0.000010$ | -0.00000 |
| Simpsons | $0.100000$ | 0.00000 |
| Simpsons | $0.000010$ | -0.00000 |
| Left Rectangle | $0.100000$ | -3.81250 |
| Left Rectangle | $0.000010$ | -0.00038 |
| Right Rectangle | $0.100000$ | 2.95625 |
| Right Rectangle | $0.000010$ | 0.00037 |

### Summary of Methods for $5x^3 - 6x^2 + 0.3x$ over [-1, 3]

| Method | Delta | Percent Error |
|---|---|---|
| Trapezoidal | $0.100000$ | -16.93086 |
| Trapezoidal | $0.000010$ | -0.00181 |
| Midpoint | $0.100000$ | -17.10882 |
| Midpoint | $0.000010$ | -0.00181 |
| Simpsons | $0.100000$ | -0.00000 |
| Simpsons | $0.000010$ | -0.00000 |
| Left Rectangle | $0.100000$ | -7.67699 |
| Left Rectangle | $0.000010$ | -0.00078 |
| Right Rectangle | $0.100000$ | -26.18473 |
| Right Rectangle | $0.000010$ | -0.00284 |

### Summary of Methods for $x^3$ over [-1, 3]

| Method | Delta | Percent Error |
|---|---|---|
| Trapezoidal | $0.100000$ | -14.87537 |
| Trapezoidal | $0.000010$ | -2.44034 |
| Midpoint | $0.100000$ | -15.01091 |
| Midpoint | $0.000010$ | -2.44034 |
| Simpsons | $0.100000$ | -2.43902 |
| Simpsons | $0.000010$ | -2.43902 |
| Left Rectangle | $0.100000$ | -8.68293 |
| Left Rectangle | $0.000010$ | -2.43966 |
| Right Rectangle | $0.100000$ | -21.06780 |
| Right Rectangle | $0.000010$ | -2.44102 |

### Summary of Methods for $x^3 - 27x^2 + 8x$ over [0, 3]

| Method | Delta | Percent Error |
|---|---|---|
| Trapezoidal | $0.100000$ | -9.88570 |
| Trapezoidal | $0.000010$ | -0.00103 |
| Midpoint | $0.100000$ | -9.97363 |
| Midpoint | $0.000010$ | -0.00103 |
| Simpsons | $0.100000$ | 0.00000 |
| Simpsons | $0.000010$ | -0.00000 |
| Left Rectangle | $0.100000$ | -5.08032 |
| Left Rectangle | $0.000010$ | -0.00051 |
| Right Rectangle | $0.100000$ | -14.69108 |
| Right Rectangle | $0.000010$ | -0.00154 |

Of course, we must mention that there is some rounding in the percent errors; the Simpsons, Midpoint, and Trapezoidal methods are not actually perfect even where a $0.00000$ appears.

# Numerical Integration Analysis Part 2

In addition to looking at the percent errors of different methods for integrating, our group also wanted to explore the effectiveness of the various methods for different types of functions. We were guided to explore an example of trigonometric, linear, quadratic, and cubic functions, but wanted to see if patterns emerged within these different types of functions and ultimately if a particular method of integration gives more accurate estimations given a specific type of function.

The actual data we collected, or a detailed summary of it, can be seen here. From this data we found that Simpson's method is by far the best of the five methods we explored, particularly for cubic functions. For symmetric functions, such as trigonometric and quadratic functions, the midpoint method is also very accurate. The trapezoidal method provides an accurate estimate, especially for small deltas, but has trouble with cubics. The other Riemann sum methods over- or underestimate depending on the type of function and, especially for larger deltas, are not as accurate as the other methods.

# Numerical Integration Analysis

Now that we have a number of numerical methods implemented, we want to compare them to see which method is best and in what circumstances.

We have a few test functions that we tried these methods on, namely the following:

• $\cos{x}$ over [0, $\pi{}$]
• $2x + 1$ over [0, 1]
• $4-x^2$ over [0, 2]
• $5x^3 - 6x^2 + 0.3x$ over [-1, 3]

### Summary of Methods for $\cos{x}$ over [0, $\pi{}$]

| Method | Delta | Percent Error |
|---|---|---|
| Midpoint | $0.1$ | -0.208927 |
| Simpsons | $0.1$ | 0.054748 |
| Right Rectangle | $0.1$ | 4.31267 |
| Riemann | $0.1$ | 5.020046 |
| Trapezoidal | $0.1$ | -0.33364 |
| Left Rectangle | $0.1$ | -4.979954 |

### Summary of Methods for $5x^3 - 6x^2 + 0.3x$ over [-1, 3] (*)

| Method | Delta | Percent Error |
|---|---|---|
| Midpoint | $0.00001$ | -0.00181 |
| Simpsons | $0.00001$ | 0.00000 |
| Right Rectangle | $0.00001$ | -0.00284 |
| Riemann | $0.00001$ | -0.00103 |
| Trapezoidal | $0.00001$ | -0.00181 |
| Left Rectangle | $0.00001$ | -0.00078 |

Between these two sets of data, let's take a closer look at the magnitudes of the percent errors to see which methods are more accurate, and then rank them.

Looking at the first table, we can fairly easily tell that Simpsons rule is the best and the Riemann sum is the worst. It is a little more difficult to pull out a full order, so we had the computer compute it: Simpsons, Midpoint, Trapezoidal, Right Rectangle, Left Rectangle, Riemann.

Same for the second table: we can easily see Simpsons was the best and Right Rectangle was the worst, with the full order: Simpsons, Left Rectangle, Riemann, Trapezoidal, Midpoint, Right Rectangle.

Remember, when comparing the orderings between these two tables, we cannot conclude anything about the individual methods, because two things change between the tables (the width of the rectangles summed and the function).

Now let’s take a closer look at the overall most correct method for all functions for each delta. That is, we will be varying the delta and looking at which method was the best.

Looking at a delta of $0.1$, we see that the Simpsons method is the most accurate for all our test functions. Interestingly, with deltas of $0.01$ and $0.001$, the Midpoint method is better than Simpsons for $\cos{x}$, but Simpsons is still better for the other three. Moving to a delta of $0.0001$, Simpsons is again best for all functions, and it remains so at $0.00001$ as well.

So far, we can see that the Simpsons method is excellent at single-variable integration. But just how good? What are the relative rates at which accuracy improves between the methods?

Look for a follow up post where we post more data about our analysis and try to answer the above questions.

* Simpsons method looks to be $0.000$ here; this is the result of rounding for presentation. The actual value is really close to zero, but not quite zero.

# Numerical Integration Methods Part 2

Within our lab, we plan to explore six different methods of numerical integration, though a couple of the methods are very similar or even just different variations of the same method. While Luke's post will tell you all about why numerical integration is important, and why you would want numerical ways to compute integrals, I will provide some of the important definitions to help you remember the specifics of the different numerical integration methods and the advantages and disadvantages each one has. I'll be covering the left-hand sum, right-hand sum, and Simpson's rule.

The left and right-hand sums are actually variations of the Riemann sum method. All of these methods approximate the area under the curve by summing the areas of rectangles that cover a similar area. As Kenny will elaborate on in his definition of the Riemann sum, these methods divide the interval over which a given function is to be integrated into subintervals, which serve as the bases of the rectangles. The height of each rectangle is determined by picking a point on the function. The areas of the rectangles are then found and added together; this sum is the approximation of the integral.

Left-Hand Sum

The distinguishing factor for a specific type of sum is the point from which the height is determined. In the left-hand sum, the left-hand endpoint of each subinterval is used to determine the height; that is, the rectangle's height is found by extending a vertical line from the left-hand endpoint of the subinterval up to the function. A picture I found on Wikipedia helps illustrate this concept:

As can be seen in the illustration, for monotonically increasing functions the left-hand sum is an underestimate of the integral. For monotonically decreasing functions, this method provides an overestimate.

The formula for this method is as follows:

$L_n = \sum_{i=0}^{n-1}f(x_i)\,d$

Where $f$ is the function we're integrating, the $x_i$ are the left endpoints, and $d$ is the width of each subinterval.
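A minimal sketch of this formula in Python (the names and test function are illustrative choices, not from our lab code):

```python
def left_hand_sum(f, a, b, n):
    """Approximate the integral of f over [a, b] with n left-endpoint rectangles."""
    d = (b - a) / n                                 # width of each subinterval
    return sum(f(a + i * d) * d for i in range(n))  # heights at left endpoints x_0 .. x_{n-1}

# x^2 is increasing on [0, 1], so the left-hand sum should slightly
# underestimate the exact value 1/3.
approx = left_hand_sum(lambda x: x**2, 0.0, 1.0, 1000)
```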

Right-Hand Sum

Conversely, the right-hand sum method uses the right-hand endpoint of each subinterval, and can be pictured as:

With a formula of: $R_n = \sum_{i=1}^{n}f(x_i)\,d$

Where $f$ is the function we're integrating, the $x_i$ are the right endpoints, and $d$ is the width of each subinterval.
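The corresponding sketch for the right-hand sum differs only in which endpoints are sampled (again, the names are illustrative):

```python
def right_hand_sum(f, a, b, n):
    """Approximate the integral of f over [a, b] with n right-endpoint rectangles."""
    d = (b - a) / n                                        # width of each subinterval
    return sum(f(a + i * d) * d for i in range(1, n + 1))  # heights at right endpoints x_1 .. x_n

# x^2 is increasing on [0, 1], so the right-hand sum should slightly
# overestimate the exact value 1/3.
approx = right_hand_sum(lambda x: x**2, 0.0, 1.0, 1000)
```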

Wikipedia actually has a nice summary if you'd like more explanation: http://en.wikipedia.org/wiki/Riemann_sum#Left_sum

Simpson’s rule

Simpson's rule allows us to compute an integral using quadratic polynomials. Like the other methods, it separates the area to be integrated into subintervals, but it differs in that it finds the area under a parabola spanning each subinterval, using quadratic polynomials that approximate the function.

Simpson's rule is formally defined as: $\frac{(b-a)}{6}[f(a)+4f(\frac{a+b}{2})+f(b)]$, and it is exact when calculating integrals of polynomials up to cubic degree.
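The formula transcribes directly into code. A sketch of a single application over $[a, b]$ (the function name is ours):

```python
def simpson_single(f, a, b):
    """Apply Simpson's rule once over [a, b], pinning the parabola at a, the midpoint, and b."""
    return (b - a) / 6 * (f(a) + 4 * f((a + b) / 2) + f(b))

# Exact for polynomials up to cubic degree: x^3 over [0, 2] integrates to exactly 4.
approx = simpson_single(lambda x: x**3, 0.0, 2.0)
```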

Wolfram MathWorld has a very succinct description of Simpson's rule, which can be found at: http://mathworld.wolfram.com/SimpsonsRule.html.

Also, an interesting fact from Wikipedia about Simpson's rule: it is widely used by naval architects to numerically integrate hull offsets and cross-sectional areas to determine volumes and centroids of ships or lifeboats.

http://en.wikipedia.org/wiki/Simpson%27s_Rule

# Numerical Integration Methods

As a follow-up to our motivation post, I will be introducing a few of the methods that we will be testing. Namely, this post will introduce Riemann sums, Trapezoidal sums, and the Midpoint method. I wanted to put these three together because they are very similar in computation (though we do not yet know how similar they are in accuracy).

### Riemann Sums

The Riemann sums method is one of the simplest ways to compute definite integrals. All this method does is sum the function evaluated at points $x_i$, each multiplied by some small width $d$. This gives us the following equation:

$$\sum_{i=1}^{n}{f(x_i)d}$$ where $n$ is the number of subintervals covering our interval.

An example of what this may look like using Python/Sage code:

```python
import numpy as np

f = lambda x: x**3   # some function f
a, b = (0, 3)        # some interval
d = 0.001            # some small delta value
sum(f(x) * d for x in np.arange(a, b, d))
```

*Riemann sums method. Source: Wikipedia.*

### Trapezoidal Sums

The Trapezoidal Sums method is similar to the Riemann sums method in that it computes sums of the function evaluated at points $x_i$. However, each term that is summed is the average of the function's values at two adjacent points. That is, our sum looks like the following:

$$\sum_{i=1}^{n-1}{(f(x_i)+f(x_{i+1}))\,d/2}$$ where $n$ again is the number of points in our interval.

An example of what this may look like in Python/ Sage code:

```python
import numpy as np

f = lambda x: x**3   # some function f
a, b = (0, 3)        # some interval
d = 0.001            # some small delta value
x = np.arange(a, b, d)
sum((f(x[i]) + f(x[i + 1])) * d / 2 for i in range(len(x) - 1))
```

### Midpoint Method

The midpoint method, like the Trapezoidal method, is very similar to the Riemann sums method, except that each rectangle's height is taken at the "middle" of its subinterval. That is, our summation looks as follows:

$$\sum_{i=1}^{n-1}{f\!\left(\frac{x_i + x_{i+1}}{2}\right)d}$$ where $n$ is the number of points in our interval.

An example of what this method may look like in Python/Sage is:

```python
import numpy as np

f = lambda x: x**3   # some function f
a, b = (0, 3)        # some interval
d = 0.001            # some small delta value
x = np.arange(a, b, d)
sum(f((x[i] + x[i + 1]) / 2) * d for i in range(len(x) - 1))
```

# Numerical Integration Methods-Motivation

I'm sure most of us have at least some experience with integrals being tricky and hard to compute. Whether it's trig substitution or some other technique, integrals can be very difficult. There are also real functions that do not have an elementary antiderivative, functions like:

$e^{-x^2}$ or $\frac{\sin x}{x}$

However, there are ways to evaluate these integrals: the numerical methods. These methods are worth studying, one, to find the area under the curve for functions like these, and two, to test how accurate the methods themselves are. My teammates will tell you more about the particular methods covered in this chapter.
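As a concrete taste of what's coming: even though $e^{-x^2}$ has no elementary antiderivative, a simple midpoint sum evaluates its integral over $[0, 1]$ easily. A sketch (the reference value via `math.erf` is only there to check the result; the variable names are ours):

```python
import math

f = lambda x: math.exp(-x**2)   # no elementary antiderivative exists
a, b, n = 0.0, 1.0, 1000
d = (b - a) / n

# Midpoint sum: sample each subinterval at its center.
approx = sum(f(a + (i + 0.5) * d) * d for i in range(n))

# Known value of the integral: sqrt(pi)/2 * erf(1) ≈ 0.746824
reference = math.sqrt(math.pi) / 2 * math.erf(1.0)
```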