# Iterated Linear Maps in the Plane

The next lab that has caught my attention, and that I would like to explore further, is the second-to-last chapter in our book: Iterated Linear Maps in the Plane. I like this one because, although it is very similar to our first lab, it is very graphical and involves matrix operations.

Before, we were iterating the function $f(x) = ax+b$; this sort of iteration is called an affine map. In this lab we will work with linear maps, that is, maps where the constant term $b = 0$. Further, because we are in the plane, our function will take vector-valued inputs and produce vector-valued outputs. So our function will look something like:

$$f(x, y) = (a_{11}x + a_{12}y, a_{21}x + a_{22}y)$$

Equivalently, we can write the iteration as:
$$\begin{pmatrix} x_{n+1} \\ y_{n+1} \end{pmatrix} = f(x_n, y_n) = A \begin{pmatrix} x_n \\ y_n \end{pmatrix}$$

where $A = \begin{pmatrix} a_{11} & a_{12} \\ a_{21} & a_{22} \end{pmatrix}$.
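Iterating such a map on a computer is straightforward. Here is a minimal sketch; the matrix $A$ and the starting point below are illustrative choices, not values from the lab:

```python
import numpy as np

# An illustrative matrix A: a rotation with cos = 0.8, sin = 0.6,
# so every iterate of a unit vector stays on the unit circle.
A = np.array([[0.8, -0.6],
              [0.6,  0.8]])
v = np.array([1.0, 0.0])  # initial point (x_0, y_0)

# Iterate (x_{n+1}, y_{n+1}) = A (x_n, y_n) and record the orbit.
orbit = [v]
for _ in range(50):
    v = A @ v
    orbit.append(v)
orbit = np.array(orbit)  # shape (51, 2); plot orbit[:, 0] vs orbit[:, 1]
```

Plotting the two columns of `orbit` against each other is what makes the patterns in this lab visible.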

Some of the questions this lab seeks to answer are similar to our first lab's: it asks us to try different variations of our matrix $A$ and/or our initial values and see if we can notice a pattern.

# Numerical Integration Analysis — Data!

We wanted to follow up with a post containing a dump of our data. This is, essentially, the percent error from the expected value for each of the given test functions:

- $\cos{x}$ over $[0, \pi]$
- $2x + 1$ over $[0, 1]$
- $4 - x^2$ over $[0, 2]$
- $5x^3 - 6x^2 + 0.3x$ over $[-1, 3]$
- $x^3$ over $[-1, 3]$
- $x^3 - 27x^2 + 8x$ over $[0, 3]$

We tested 5 different deltas (rectangle widths) $dx$, namely $0.1$, $0.01$, $0.001$, $0.0001$, and $0.00001$. We are not going to include tables for every method at every delta; it's just too much. We will, however, show the first delta ($0.1$) and the last ($0.00001$).
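For reference, the percent errors below are signed relative differences from the exact integral. A sketch of that computation, assuming this convention (the exact normalization used in our code is not shown here):

```python
import numpy as np

def percent_error(approx, exact):
    """Signed percent error of an approximation against the exact value."""
    return (approx - exact) / exact * 100

# Example: a left-rectangle approximation of 2x + 1 over [0, 1],
# whose exact integral is 2.
d = 0.1
x = np.arange(0, 1, d)
approx = np.sum((2 * x + 1) * d)
print(percent_error(approx, 2.0))
```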

### Summary of Methods for $\cos{x}$ over $[0, \pi]$

| Method | Delta | Percent Error |
|---|---|---|
| Trapezoidal | $0.100000$ | -0.33364 |
| Trapezoidal | $0.000010$ | -0.00000 |
| Midpoint | $0.100000$ | -0.20893 |
| Midpoint | $0.000010$ | -0.00000 |
| Simpsons | $0.100000$ | 0.05475 |
| Simpsons | $0.000010$ | 0.00000 |
| Left Rectangle | $0.100000$ | -4.97995 |
| Left Rectangle | $0.000010$ | -0.00050 |
| Right Rectangle | $0.100000$ | 4.31267 |
| Right Rectangle | $0.000010$ | 0.00050 |

### Summary of Methods for $2x + 1$ over [0, 1]

| Method | Delta | Percent Error |
|---|---|---|
| Trapezoidal | $0.100000$ | -14.50000 |
| Trapezoidal | $0.000010$ | -0.00150 |
| Midpoint | $0.100000$ | -14.50000 |
| Midpoint | $0.000010$ | -0.00150 |
| Simpsons | $0.100000$ | 0.00000 |
| Simpsons | $0.000010$ | 0.00000 |
| Left Rectangle | $0.100000$ | -10.00000 |
| Left Rectangle | $0.000010$ | -0.00100 |
| Right Rectangle | $0.100000$ | -19.00000 |
| Right Rectangle | $0.000010$ | -0.00200 |

### Summary of Methods for $4-x^2$ over [0, 2]

| Method | Delta | Percent Error |
|---|---|---|
| Trapezoidal | $0.100000$ | -0.42813 |
| Trapezoidal | $0.000010$ | -0.00000 |
| Midpoint | $0.100000$ | -0.33906 |
| Midpoint | $0.000010$ | -0.00000 |
| Simpsons | $0.100000$ | 0.00000 |
| Simpsons | $0.000010$ | -0.00000 |
| Left Rectangle | $0.100000$ | -3.81250 |
| Left Rectangle | $0.000010$ | -0.00038 |
| Right Rectangle | $0.100000$ | 2.95625 |
| Right Rectangle | $0.000010$ | 0.00037 |

### Summary of Methods for $5x^3 - 6x^2 + 0.3x$ over $[-1, 3]$

| Method | Delta | Percent Error |
|---|---|---|
| Trapezoidal | $0.100000$ | -16.93086 |
| Trapezoidal | $0.000010$ | -0.00181 |
| Midpoint | $0.100000$ | -17.10882 |
| Midpoint | $0.000010$ | -0.00181 |
| Simpsons | $0.100000$ | -0.00000 |
| Simpsons | $0.000010$ | -0.00000 |
| Left Rectangle | $0.100000$ | -7.67699 |
| Left Rectangle | $0.000010$ | -0.00078 |
| Right Rectangle | $0.100000$ | -26.18473 |
| Right Rectangle | $0.000010$ | -0.00284 |

### Summary of Methods for $x^3$ over [-1, 3]

| Method | Delta | Percent Error |
|---|---|---|
| Trapezoidal | $0.100000$ | -14.87537 |
| Trapezoidal | $0.000010$ | -2.44034 |
| Midpoint | $0.100000$ | -15.01091 |
| Midpoint | $0.000010$ | -2.44034 |
| Simpsons | $0.100000$ | -2.43902 |
| Simpsons | $0.000010$ | -2.43902 |
| Left Rectangle | $0.100000$ | -8.68293 |
| Left Rectangle | $0.000010$ | -2.43966 |
| Right Rectangle | $0.100000$ | -21.06780 |
| Right Rectangle | $0.000010$ | -2.44102 |

### Summary of Methods for $x^3 - 27x^2 + 8x$ over $[0, 3]$

| Method | Delta | Percent Error |
|---|---|---|
| Trapezoidal | $0.100000$ | -9.88570 |
| Trapezoidal | $0.000010$ | -0.00103 |
| Midpoint | $0.100000$ | -9.97363 |
| Midpoint | $0.000010$ | -0.00103 |
| Simpsons | $0.100000$ | 0.00000 |
| Simpsons | $0.000010$ | -0.00000 |
| Left Rectangle | $0.100000$ | -5.08032 |
| Left Rectangle | $0.000010$ | -0.00051 |
| Right Rectangle | $0.100000$ | -14.69108 |
| Right Rectangle | $0.000010$ | -0.00154 |

Of course, we must mention that the percent errors above are rounded. The Simpson's, Midpoint, and Trapezoidal methods are not perfect.

# Numerical Integration Analysis

Now that we have a number of numerical methods implemented, we want to compare them to see which method is best and in what circumstances.

We have a few test functions we were trying these methods over. Namely the following:

- $\cos{x}$ over $[0, \pi]$
- $2x + 1$ over $[0, 1]$
- $4 - x^2$ over $[0, 2]$
- $5x^3 - 6x^2 + 0.3x$ over $[-1, 3]$

### Summary of Methods for $\cos{x}$ over $[0, \pi]$

| Method | Delta | Percent Error |
|---|---|---|
| Midpoint | $0.1$ | -0.208927 |
| Simpsons | $0.1$ | 0.054748 |
| Right Rectangle | $0.1$ | 4.31267 |
| Riemann | $0.1$ | 5.020046 |
| Trapezoidal | $0.1$ | -0.33364 |
| Left Rectangle | $0.1$ | -4.979954 |

### Summary of Methods for $5x^3 - 6x^2 + 0.3x$ over $[-1, 3]$ (*)

| Method | Delta | Percent Error |
|---|---|---|
| Midpoint | $0.00001$ | -0.00181 |
| Simpsons | $0.00001$ | 0.00000 |
| Right Rectangle | $0.00001$ | -0.00284 |
| Riemann | $0.00001$ | -0.00103 |
| Trapezoidal | $0.00001$ | -0.00181 |
| Left Rectangle | $0.00001$ | -0.00078 |

Between these two sets of data, let's take a closer look at the magnitudes of the percent errors to see which methods are most accurate, and then rank them.

Looking back at the first table, we can fairly easily tell that Simpson's rule is the best and Riemann sums the worst. It is a little more difficult to pull out a full ordering, so we had the computer compute it: Simpsons, Midpoint, Trapezoidal, Right Rectangle, Left Rectangle, Riemann.

The same goes for the second table: we can easily see that Simpson's was the best and Right Rectangle the worst. The order: Simpsons, Left Rectangle, Riemann, Trapezoidal, Midpoint, Right Rectangle.
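The ranking step is simple to reproduce: sort the methods by the magnitude of their percent error. Using the values from the first table above:

```python
# Percent errors for cos(x) over [0, pi] at delta = 0.1
# (values taken from the first table above).
errors = {
    "Midpoint": -0.208927,
    "Simpsons": 0.054748,
    "Right Rectangle": 4.31267,
    "Riemann": 5.020046,
    "Trapezoidal": -0.33364,
    "Left Rectangle": -4.979954,
}

# Rank from most to least accurate by |percent error|.
ranked = sorted(errors, key=lambda name: abs(errors[name]))
print(ranked)
# → ['Simpsons', 'Midpoint', 'Trapezoidal', 'Right Rectangle',
#    'Left Rectangle', 'Riemann']
```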

Remember, when comparing the orderings between these two tables, we cannot conclude anything about the methods themselves, because two things change between them (the width of the rectangles being summed and the function).

Now let’s take a closer look at the overall most correct method for all functions for each delta. That is, we will be varying the delta and looking at which method was the best.

Looking at a delta of $0.1$, we see that Simpson's method is the most accurate for all our test functions. Interestingly, though, with deltas of $0.01$ and $0.001$, the Midpoint method beats Simpson's for $\cos{x}$, while Simpson's is still better for the other three. Moving to a delta of $0.0001$, Simpson's method is the best for all functions again, and it remains so at $0.00001$ as well.
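A sketch of how such a sweep can be automated, using just two of the methods and one test function ($4 - x^2$ over $[0, 2]$) for brevity; the full comparison runs every method over every function:

```python
import numpy as np

f = lambda x: 4 - x**2
a, b, exact = 0.0, 2.0, 16 / 3   # exact integral of 4 - x^2 over [0, 2]

def midpoint(f, a, b, d):
    x = np.arange(a, b, d)
    return np.sum(f((x[:-1] + x[1:]) / 2) * d)

def trapezoidal(f, a, b, d):
    x = np.arange(a, b, d)
    return np.sum((f(x[:-1]) + f(x[1:])) * d / 2)

# For each delta, record which method has the smallest |percent error|.
best_by_delta = {}
for d in (0.1, 0.01, 0.001):
    errs = {name: abs((rule(f, a, b, d) - exact) / exact * 100)
            for name, rule in [("Midpoint", midpoint),
                               ("Trapezoidal", trapezoidal)]}
    best_by_delta[d] = min(errs, key=errs.get)
print(best_by_delta)
```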

So far, we can see that Simpson's method is remarkably good at single-variable integration. But how good? What are the relative rates at which the methods gain accuracy?

Look for a follow-up post where we share more data from our analysis and try to answer the above questions.

\* Simpson's method looks to be $0.00000$ here; this is the result of rounding for presentation. The actual value is very close to zero, but not quite zero.

# Numerical Integration Methods

As a follow-up to our motivation post, I will introduce a few of the methods that we will be testing. Namely, this post covers Riemann sums, Trapezoidal sums, and the Midpoint method. I wanted to put these three together because they are very similar in computation (though we do not yet know how similar they are in accuracy).

### Riemann Sums

The Riemann sums method is one of the simplest methods for computing definite integrals. All it does is sum the function evaluated at each grid point $x_i$, multiplied by some small width $d$. This gives us the following equation:

$$\sum_{i=1}^{n}{f(x_i)d}$$ where $n$ is the number of grid points in our interval.

An example of what this may look like using Python/Sage code:

```python
import numpy as np

f = lambda x: x**3   # some function f
a, b = (0, 3)        # some interval
d = 0.001            # some small delta value

# Evaluate f on the grid points and sum the rectangle areas f(x_i) * d.
np.sum(f(np.arange(a, b, d)) * d)
```

*Riemann sums method. Source: Wikipedia.*

### Trapezoidal Sums

The Trapezoidal sums method is similar to the Riemann sums method in that it sums terms built from the function evaluated at grid points $x_i$. However, each term is the average of the function at two adjacent points. That is, our sum looks like the following:

$$\sum_{i=1}^{n-1}{\frac{f(x_i)+f(x_{i+1})}{2}d}$$ where $n$ again is the number of grid points in our interval.

An example of what this may look like in Python/Sage code:

```python
import numpy as np

f = lambda x: x**3   # some function f
a, b = (0, 3)        # some interval
d = 0.001            # some small delta value
x = np.arange(a, b, d)

# Average the function at adjacent grid points and sum the trapezoid areas.
np.sum((f(x[:-1]) + f(x[1:])) * d / 2)
```

### Midpoint Method

The Midpoint method, like the Trapezoidal method, is very similar to the Riemann sums method, except that we evaluate the function at the midpoint of each pair of adjacent grid points, i.e., at the "middle" of each rectangle. That is, our summation looks as follows:

$$\sum_{i=1}^{n-1}{f\left(\frac{x_i+x_{i+1}}{2}\right)d}$$ where $n$ is the number of grid points in our interval.

An example of what this method may look like in Python/Sage is:

```python
import numpy as np

f = lambda x: x**3   # some function f
a, b = (0, 3)        # some interval
d = 0.001            # some small delta value
x = np.arange(a, b, d)

# Evaluate f at the midpoint of each pair of adjacent grid points.
np.sum(f((x[:-1] + x[1:]) / 2) * d)
```

# Numerical Integration

Numerical Integration is a lab exploring numerical methods for computing integrals. That is, using a computer program or calculator to find an approximation to the integral of some function $f(x)$.

Of course, because we are talking about integration, we can't go very far without the fundamental theorem of calculus: $F(x) = \int_a^x{f(t)dt}$. In this lab we will cover a few methods for numerically computing integrals, namely the Rectangle/Riemann sum, the Trapezoidal sum, and the Parabola/Simpson's rule, just to name a few.

In calculus courses, we are usually given "nice" functions: functions that are "easy" to integrate and do not require numerical methods. However, the set of "nice" functions is very small, so we must often resort to numerical methods. For example, the following integrand has no elementary antiderivative:

$$\int{e^{e^x}dx}$$

But we can approximate it using one of the methods that we explore in this lab.
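For instance, here is a midpoint-rule sketch of this integral over $[0, 1]$; the interval is an illustrative choice, since only the definite integral can be computed numerically:

```python
import numpy as np

# Approximate the definite integral of e^(e^x) over [0, 1] with the
# midpoint rule; there is no elementary antiderivative to check against,
# but shrinking d shows the estimate stabilizing.
f = lambda x: np.exp(np.exp(x))
a, b, d = 0.0, 1.0, 0.001
x = np.arange(a, b, d)              # left endpoints of the subintervals
approx = np.sum(f(x + d / 2) * d)   # evaluate at each midpoint
print(approx)
```

Since $e \le e^{e^x} \le e^e$ on $[0, 1]$, the result must land between roughly $2.72$ and $15.15$, which is a quick sanity check on the code.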

I was initially drawn toward this lab because other courses have introduced numerical integration and I have used other numerical methods by hand and wanted to further explore the topic by automating it and exploring the different methods.

# Kenny and Epidemic Models

My name is Kenny. I’m an Applied Mathematics student at Boise State University, with other interests in Computer Science and programming.

An area of math that I'm interested in is differential equations and modeling. Using differential equations, we can model a number of natural phenomena: predator-prey systems, competing predators, a mass on a spring, and epidemic models. The last of these may be modeled, simply, by the system of differential equations:

\begin{align*}
\frac{dS}{dt} &= -\beta{}SI \\
\frac{dI}{dt} &= \beta{}SI-\gamma{}I \\
\frac{dR}{dt} &= \gamma{}I
\end{align*}
where $S(t)$ is the number of individuals not yet infected, $I(t)$ the number of infected individuals, $R(t)$ the number of recovered individuals, and $\beta$ and $\gamma$ are the infection and recovery rates, respectively.
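This system has no closed-form solution in general, but it is easy to simulate numerically. A minimal forward-Euler sketch, where the values of $\beta$, $\gamma$, the step size, and the initial populations are all illustrative choices:

```python
# Forward-Euler simulation of the SIR system above.
beta, gamma, dt = 0.0005, 0.1, 0.1
S, I, R = 999.0, 1.0, 0.0           # susceptible, infected, recovered
history = [(S, I, R)]
for _ in range(int(200 / dt)):      # simulate t in roughly [0, 200]
    dS = -beta * S * I
    dI = beta * S * I - gamma * I
    dR = gamma * I
    S, I, R = S + dS * dt, I + dI * dt, R + dR * dt
    history.append((S, I, R))
# The three derivatives sum to zero, so S + I + R stays constant:
# the model only moves individuals between the three groups.
```

Plotting the three columns of `history` against time shows the classic epidemic curve: $S$ falls, $I$ rises and then dies out, and $R$ saturates.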