Author Archives: Janae Korfanta

Numerical Integration Analysis Part 2

In addition to looking at the percent errors of the different integration methods, our group also wanted to explore how effective the various methods are for different types of functions. We were guided to explore examples of trigonometric, linear, quadratic, and cubic functions, and wanted to see whether patterns emerged within these types and, ultimately, whether a particular method of integration gives more accurate estimates for a specific type of function.

The actual data we collected, along with a detailed summary, can be seen here. From this data we found that Simpson's method is by far the best of the five methods we explored, particularly for cubic functions. For symmetric functions, such as trigonometric and quadratic functions, the midpoint method is also very accurate. The trapezoidal method provides an accurate estimate, especially for small subinterval widths, but has trouble with cubic functions. The other Riemann sum methods over- or underestimate depending on the type of function and, especially for larger subinterval widths, are not as accurate as the other methods.
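These findings can be illustrated with a quick sketch. The code below is not our group's actual experiment; it assumes an example cubic, f(x) = x³ on [0, 2] (whose exact integral is 4), applies the five methods with 10 subintervals, and reports percent errors. The test function and settings are illustrative choices.

```python
# A hedged comparison sketch: percent errors of five numerical integration
# methods on an example cubic (not the group's collected data).
def percent_errors(f, a, b, n, exact):
    d = (b - a) / n
    xs = [a + i * d for i in range(n + 1)]
    approx = {
        "left":     sum(f(xs[i]) * d for i in range(n)),
        "right":    sum(f(xs[i + 1]) * d for i in range(n)),
        "midpoint": sum(f(xs[i] + d / 2) * d for i in range(n)),
        "trap":     sum((f(xs[i]) + f(xs[i + 1])) / 2 * d for i in range(n)),
        "simpson":  sum((d / 6) * (f(xs[i]) + 4 * f(xs[i] + d / 2) + f(xs[i + 1]))
                        for i in range(n)),
    }
    return {name: abs(v - exact) / exact * 100 for name, v in approx.items()}

errors = percent_errors(lambda x: x**3, 0.0, 2.0, 10, 4.0)
# Simpson's rule is exact on the cubic (up to rounding), while the plain
# left- and right-hand sums are off by roughly 20% at this resolution.
```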


Numerical Integration Methods Part 2

Within our lab, we plan to explore six different methods of numerical integration, though a couple of the methods are very similar or are variations of the same method. While Luke's post will tell you why numerical integration is important, and why you would want numerical ways to compute integrals, I will provide some of the important definitions to help you remember the specifics of the different methods and the advantages and disadvantages each one has. I'll be covering the left-hand sum, the right-hand sum, and Simpson's rule.

The left- and right-hand sums are actually variations of the Riemann sum method. Both approximate the area under the curve with rectangles. As Kenny will elaborate on in his definition of the Riemann sum, these methods divide the interval over which a given function is to be integrated into subintervals, which serve as the bases of the rectangles. The height of each rectangle is determined by evaluating the function at a chosen point in its subinterval. The areas of the rectangles are then found and added together; the sum of the areas is the approximation of the integral.

Left-Hand Sum

The distinguishing factor for a specific type of sum is the point from which the height is determined. In the left-hand sum, the left-hand endpoint of each subinterval determines the height: the rectangle's height is found by extending a vertical line from the left-hand side of the subinterval up to the function. A picture I found on Wikipedia helps illustrate this concept:

[Figure: left Riemann sum illustration, from Wikipedia (File:LeftRiemann2.svg)]

As can be seen in the illustration, for monotonically increasing functions the left-hand sum underestimates the integral; for monotonically decreasing functions, it provides an overestimate.

The formula for this method is as follows:

$L_n = \sum_{i=0}^{n-1}f(x_i)d$

Where $x_i$ is the left-hand endpoint of the $i$th subinterval of the function we're integrating and $d$ is the width of each subinterval.
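As a quick illustration, the left-hand sum is only a few lines of code. The example function and interval here are my own assumptions: f(x) = x² on [0, 1], which integrates exactly to 1/3.

```python
# Left-hand sum sketch: heights come from the left endpoint of each
# subinterval. The test function and interval are illustrative choices.
def left_hand_sum(f, a, b, n):
    d = (b - a) / n                                  # width of each subinterval
    return sum(f(a + i * d) * d for i in range(n))   # i = 0, ..., n-1

left_approx = left_hand_sum(lambda x: x**2, 0.0, 1.0, 1000)
# x**2 is increasing on [0, 1], so this slightly undershoots the exact 1/3
```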

Right-Hand Sum

Conversely, the right-hand sum method uses the right-hand endpoint of each subinterval, and can be pictured as:

[Figure: right Riemann sum illustration, from Wikipedia (File:RightRiemann2.svg)]

With a formula of: $R_n = \sum_{i=1}^{n}f(x_i)d$

Where $x_i$ is the right-hand endpoint of the $i$th subinterval of the function we're integrating and $d$ is the width of each subinterval.
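The right-hand sum looks almost identical in code; only the index range changes. As before, the example function f(x) = x² on [0, 1] (exact integral 1/3) is my own illustrative choice.

```python
# Right-hand sum sketch: heights come from the right endpoint of each
# subinterval (i = 1, ..., n). The test function is an assumed example.
def right_hand_sum(f, a, b, n):
    d = (b - a) / n                                  # width of each subinterval
    return sum(f(a + i * d) * d for i in range(1, n + 1))

right_approx = right_hand_sum(lambda x: x**2, 0.0, 1.0, 1000)
# x**2 is increasing on [0, 1], so this slightly overshoots the exact 1/3
```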

Wikipedia actually has a nice summary if you'd like more explanation: http://en.wikipedia.org/wiki/Riemann_sum#Left_sum

Simpson’s rule

Simpson's rule allows us to compute an integral using quadratic polynomials. Like the other methods, it separates the region to be integrated into subintervals, but it differs in that it approximates the function on each subinterval with a quadratic polynomial and finds the area under that parabola.

Simpson's rule is formally defined as: $\frac{(b-a)}{6}[f(a)+4f(\frac{a+b}{2})+f(b)]$, and it is very accurate when integrating polynomials; in fact, it is exact for polynomials up to cubic degree.
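The formula translates directly into a one-line function. The cubic test below is my own example, chosen to show the exactness claim: the integral of x³ from 0 to 2 is exactly 4.

```python
# The basic single-interval Simpson's rule, straight from the formula above.
def simpson(f, a, b):
    return (b - a) / 6 * (f(a) + 4 * f((a + b) / 2) + f(b))

# Exactness on a cubic (an assumed example): integral of x**3 on [0, 2] is 4
cubic_result = simpson(lambda x: x**3, 0.0, 2.0)
```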

Wolfram MathWorld has a very succinct description of Simpson's rule, which can be found at: http://mathworld.wolfram.com/SimpsonsRule.html.

Also, an interesting fact from Wikipedia about Simpson's rule: it is widely used by naval architects to numerically integrate hull offsets and cross-sectional areas to determine volumes and centroids of ships or lifeboats.

http://en.wikipedia.org/wiki/Simpson%27s_Rule


Randomized Response Surveys-Janae

I found the lab about Randomized Response Surveys, from chapter 6, interesting. This lab has the reader explore how to get accurate data from a survey and how to evaluate that data.

An important mathematical term introduced in this lab is bias, which is the difference between an estimate's expected value and the true value being estimated. Expected value is itself another term defined in this chapter; there is actually a fair amount of vocabulary.

A good example question from this lab is: "Without doing any simulations, guess the general shape of the functional relation between Pr(Heads) for the penny and the SD of the estimate. How do you think the estimator will behave if Pr(Heads) is near 0? Near 1? Record your guess in the form of a sketch of a graph of SD(theta) as a function of theta = Pr(Heads)."
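One way to check a guess like this is with a small simulation. The scheme and numbers below are my own assumptions, not the lab's exact setup: each respondent flips a penny with Pr(Heads) = p; on heads they answer the sensitive yes/no question truthfully, and on tails they answer "yes" no matter what. If q is the true proportion of "yes", then Pr(answer is "yes") = (1 - p) + p·q, which we can invert to build an estimator of q.

```python
import random

# A minimal randomized-response simulation sketch (assumed scheme).
def one_response(p, q_true, rng):
    if rng.random() < p:              # heads: truthful answer
        return rng.random() < q_true
    return True                       # tails: forced "yes"

def estimate(p, q_true, n, rng):
    yes = sum(one_response(p, q_true, rng) for _ in range(n))
    return (yes / n - (1 - p)) / p    # invert Pr("yes") = (1 - p) + p*q

rng = random.Random(0)
sds = {}
for p in (0.1, 0.5, 0.9):
    ests = [estimate(p, 0.3, 500, rng) for _ in range(200)]
    mean = sum(ests) / len(ests)
    sds[p] = (sum((e - mean) ** 2 for e in ests) / len(ests)) ** 0.5
# The SD of the estimate is largest when Pr(Heads) is near 0, since few
# respondents give informative answers, and it shrinks as Pr(Heads) grows.
```

Under these assumptions, the simulated SD blows up as Pr(Heads) approaches 0, which matches the intuition the question is fishing for: privacy goes up, but precision goes down.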

I think the general nature of statistics, having to adjust for unexpected circumstances and how you apply logic and patterns to those circumstances is what fascinates me about this lab.