Day 
Speaker/Activity 
26 Aug 2014

Organizational meeting

05 Sep 2014

Overview of the M.Sc. thesis

12 Sep 2014

Dr. Jaechoul Lee
Extreme Value Theory and Its Application in Climatology
Weather/climate extreme events have profound societal, ecological, and economic impacts. In 2013, there were nine weather- and climate-related disaster events across the United States with losses exceeding one billion dollars each (U.S. Billion-Dollar Weather/Climate Disasters report by NOAA's NCDC). As the probabilistic features of these extreme events are intrinsically non-Gaussian, we need to use non-Gaussian models that take into account the special features pertinent to extreme events.
The first part of this talk is a brief overview of extreme value theory. The generalized extreme value and generalized Pareto distributions will be discussed as appropriate probability distributions for modeling extreme climate events. The second part is my research on an application of extreme value theory. This research develops trend estimation techniques for United States maximum and minimum temperatures. Statistical models with extreme value and changepoint features are used to estimate trends and their uncertainties. The results show that maximum temperatures are not changing greatly; interestingly, there are many locations that show some cooling. In contrast, the minimum temperatures show significant warming.
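As a rough illustration of the kind of model the talk describes, the sketch below fits a Gumbel distribution (the GEV family with shape parameter zero) to simulated annual temperature maxima by the method of moments and computes a return level. The data and parameter values here are invented for illustration and are not from the speaker's work.

```python
import numpy as np

rng = np.random.default_rng(0)
# Hypothetical annual maximum temperatures (degrees C), illustrative only
annual_max = rng.gumbel(loc=35.0, scale=2.0, size=60)

# Method-of-moments fit of a Gumbel distribution (GEV with shape xi = 0):
#   scale = sample_std * sqrt(6) / pi,  loc = sample_mean - gamma * scale
euler_gamma = 0.5772156649015329
scale = annual_max.std(ddof=1) * np.sqrt(6) / np.pi
loc = annual_max.mean() - euler_gamma * scale

# T-year return level: the value exceeded once every T years on average
T = 50
return_level = loc - scale * np.log(-np.log(1 - 1 / T))
print(loc, scale, return_level)
```

In practice one would fit the full three-parameter GEV (or a generalized Pareto to threshold exceedances) by maximum likelihood, which is what allows trend and changepoint terms to enter the model.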

19 Sep 2014

Dr. Zach Teitler
Recent advances in Waring rank and apolarity
Waring rank is a measure of the complexity of polynomials related to sums-of-powers expressions and to a number of applications such as interpolation problems, blind source separation problems in signal processing, mixture models in statistics, and more. We review some recent advances having in common the use of apolarity, a sort of reversed version of differential equations in which one considers the set of differential equations that have a given function as a solution.
1. Apolarity is applied to describe criteria for a polynomial to be expressible as a sum of functions in separate sets of variables, possibly after a change of coordinates. This is related to separation-of-variables techniques in differential equations and to topology (criteria for a manifold to be decomposable as a connected sum). This is joint work with Buczynska, Buczynski, and Kleppe.
2. The set of sum-of-powers decompositions of a monomial is described. A corollary is a necessary and sufficient condition for a monomial to have a unique such decomposition, up to scaling the variables. This is joint work with Buczynska and Buczynski.
3. One generalization of monomials is the family of polynomials that completely factor as products of linear factors, geometrically defining a union of hyperplanes. Waring ranks of hyperplane arrangements are determined in the case of mirror arrangements of finite reflection groups satisfying a technical hypothesis which includes many cases of interest. This is joint work with Woo.
If time permits, ongoing work will be described, including geometric lower bounds for generalized Waring rank, apolarity of general hyperplane arrangements, and a number of other open questions.
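To illustrate the notion of Waring rank for readers new to it, here is a standard small example (background, not taken from the talk): the monomial $xy$ has Waring rank 2, since

```latex
xy \;=\; \tfrac{1}{4}\left[(x+y)^2 - (x-y)^2\right],
```

and $xy$ cannot be written as a single square of a linear form, so two powers are required.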

26 Sep 2014

Research journal articles

03 Oct 2014

Dr. Donna Calhoun
Adaptive mesh refinement for solving partial differential equations on logically Cartesian, multiblock domains
Adaptive mesh refinement is a widely used strategy for solving partial differential equations (PDEs) using finite volume, finite element or finite difference methods. By dynamically allocating grid resources only in regions of a computational domain where the solution features are of most interest, we can realize significant savings in computational effort and cost. In this talk, we will introduce block-structured adaptive mesh refinement (AMR) as first described by Berger and Oliger (Journal of Computational Physics, 1984), then survey existing software frameworks for implementing this approach to AMR, discuss the challenges in coupling AMR with sophisticated spatial schemes and time stepping strategies, and finally describe our own efforts to develop adaptive mesh codes that can easily be coupled with existing single-grid PDE solvers. I will focus on finite volume methods for equations that model wave-like behavior, including advection, gas dynamics, and the shallow water wave equations.

10 Oct 2014

Dr. Jodi Mead
Inverse Methods: Combining Observational Data with Mathematical Models
Inverse methods are used to address several issues that arise when using mathematical models to describe or predict events and situations in science and engineering. The first is one of inconsistency: observational data often do not match model results. The second is that mathematical models typically require additional information, and the observational data may not be sufficient to resolve it. The third difficulty is one of instability: when using data to inform a model, slight changes in the data can produce drastically different results. I've experienced all of these difficulties while working on problems in oceanography, hydrology, geophysics and image processing. Through these experiences I've developed an approach to obtain more meaningful results by incorporating statistics about the data and models. I will discuss this methodology and its future directions, with opportunities for student research.
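The instability mentioned above can be seen in a small sketch (illustrative only, not the speaker's specific method): fitting a nearly rank-deficient polynomial model by naive least squares versus a Tikhonov-regularized solve. The model, noise level, and regularization parameter below are all made up for the example.

```python
import numpy as np

rng = np.random.default_rng(1)
# Ill-conditioned forward model: a degree-7 polynomial fit on [0, 1]
A = np.vander(np.linspace(0.0, 1.0, 20), 8, increasing=True)
x_true = np.ones(8)
d = A @ x_true + 1e-4 * rng.standard_normal(20)  # slightly noisy data

# Naive least-squares solution: sensitive to the small data perturbation
x_naive = np.linalg.lstsq(A, d, rcond=None)[0]

# Tikhonov regularization: solve (A^T A + alpha I) x = A^T d
alpha = 1e-3
x_reg = np.linalg.solve(A.T @ A + alpha * np.eye(8), A.T @ d)

# Compare how far each recovered model is from the truth
print(np.linalg.norm(x_naive - x_true), np.linalg.norm(x_reg - x_true))
```

The regularized solution trades a small bias for stability; choosing alpha well, and doing so in a statistically principled way, is a central question in this area.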

17 Oct 2014

Research journal articles

24 Oct 2014

Dr. Inanc Senocak, Mechanical Engineering, BSU
Numerical simulation of fluid flows: An engineer's view
Fluid flow is ubiquitous in our everyday life. Whether it is a draining sink or an aircraft, the governing equations are the same. However, the interplay between inertial and viscous effects gives rise to many interesting problems thanks to the inherent nonlinearity in the governing equations. As the inertial effects dominate, the flow field goes through a transition to a turbulent state that has unsteady, random and chaotic behaviour. In a turbulent state, the flow field can be characterized by many scales in space and time. Because most practical fluid flow problems are turbulent, scientists and engineers have spent many years on the so-called turbulence modeling problem. In this talk, I'll present an overview of the origin of turbulence closure in fluid flow equations and discuss three approaches to model and understand turbulence and its relation to numerical methods used in the simulations. The discussion will continue with an overview of the immersed boundary method on a Cartesian mesh, and its extension to simulate turbulent flows over complex geometry. I'll conclude by discussing ongoing research in wind forecasting over complex terrain using graphics processing units for fast calculations.

31 Oct 2014

Dr. Jennifer Kacmarcik, Dept. of Mathematics, University of Montana
An Interesting Family of Symmetric Polynomials in Three Variables
My work in several complex variables has prompted me to ask a very basic question about polynomials: Suppose Q(x,y,z) is a homogeneous polynomial in three variables of degree d in which all degree d monomials appear with nonzero coefficient. Let S = x+y+z. What is the minimum number of terms in P = SQ? What properties do these minimal polynomials have? What if we require P to also be symmetric? In this talk I will answer these questions. We will see that the coefficients of these minimal polynomials have some very interesting number-theoretic properties.

07 Nov 2014

Dr. Marion Scheepers
The ciliate decryptome
Ciliates are single-celled organisms with on-board cryptology technology, called the ciliate decryptome. The decryptome is an important component of ciliate genome maintenance. Biologists, together with computer scientists, have proposed at least two different models for the operations performed by the decryptome. The hypotheses of one of these models imply that the decryptome is a universal Turing machine. It has also been proven that the fundamental decryptome operations observed in ciliates are in fact common to these two models. We discuss recent mathematical results inspired by the decryptome operations common to the two models.

14 Nov 2014

Dr. Leming Qu (Chair)
High Dimensional Copula Density Estimation by Archimedean Copula Mixture Model
As a stochastic dependence modeling tool that goes beyond the classical normal distribution model, the copula is widely used in financial engineering and risk management. Existing high-dimensional copula applications mostly assume parametric copula models, while existing nonparametric copula estimation methods are mainly applicable only to bivariate copulas. High-dimensional nonparametric copula estimation is challenging due to the curse of dimensionality.
A high-dimensional nonparametric copula density estimation method is proposed here. It is based on (1) an Archimedean copula mixture model, (2) maximum penalized likelihood estimation, and (3) shrinkage of small mixture proportions to achieve dimension reduction.
Archimedean copulas are popular in practice because they allow modeling dependence in arbitrarily high dimensions with only one parameter, which governs the strength of dependence. An Archimedean copula mixture model allows more flexibility in high dimensions than a single Archimedean copula. An L1-type sparsity-promoting penalty term is imposed on the mixture proportions, in addition to the constraints of nonnegativity and summing to one. A fast gradient-based algorithm is proposed to maximize the resulting constrained penalized likelihood function, and is compared with the expectation-maximization algorithm. By maximizing the penalized likelihood function, mixture components with small weights can be removed by a thresholding rule (shrinkage operator) and the remaining parameters are estimated. In this way, the effective dimensionality of the problem is greatly reduced for high-dimensional problems. Numerical simulation is carried out to study the finite-sample performance of the proposed approach. The proposed method is applied to a stock market data set to investigate the stocks' dependence structure and co-movement.
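For readers unfamiliar with the terminology, an Archimedean copula is built from a single generator function $\varphi$; this background is standard and not specific to the talk:

```latex
C(u_1,\dots,u_d) \;=\; \varphi^{-1}\!\bigl(\varphi(u_1) + \cdots + \varphi(u_d)\bigr).
```

For example, the Clayton generator $\varphi_\theta(t) = (t^{-\theta}-1)/\theta$ with $\theta > 0$ gives $C(u,v) = (u^{-\theta} + v^{-\theta} - 1)^{-1/\theta}$, with the single parameter $\theta$ governing the strength of dependence. A mixture model then takes the copula density to be $c(\mathbf{u}) = \sum_k \pi_k\, c_k(\mathbf{u})$ with $\pi_k \ge 0$ and $\sum_k \pi_k = 1$, which is where the L1-type penalty on the proportions $\pi_k$ enters.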

21 Nov 2014

Dr. Uwe Kaiser (Associate Chair)
From Quantum Computation to Quantum Topology
The talk gives an introduction to models of classical and quantum computation as information processing. Quantum information processing uses wave interference and entanglement in informationally isolated systems. I will briefly discuss the mathematical description of quantum information processing (the axioms of quantum mechanics) by unitary operators acting on complex vector spaces. An example of the resulting apparent improvement in computational power of quantum over classical computation is described. Quantum information processing systems involve the wave functions of composite systems and the statistics of the exchange of identical particles. Particle exchange has long been known to be related to the topology of particles moving in space. Interesting examples are the so-called anyons, quasiparticles moving in 2-space and known to appear, e.g., in the fractional quantum Hall effect. The mathematical model of anyons is topological quantum field theory, the basic structure of Quantum Topology.

05 Dec 2014

Student presentations
Stuart Nygard
A Survey of the Density Topology (Slides)
The study of continuous functions is fundamental to analysis. Of course, not all functions are continuous. Since continuity provides us with many useful theorems, is there any way to extend those theorems to functions which are "almost continuous"? What does it mean for a function to be almost continuous? I will discuss the density topology, which is one way of making sense of almost continuous functions. The density topology not only gives us tools for working with almost continuous functions, it also has nice topological properties. We will discuss a few, including separability and first-countability.
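For context, here is the standard definition behind the talk's title (background, not the speaker's slides): a point $x$ is a density point of a Lebesgue measurable set $E \subseteq \mathbb{R}$ if

```latex
\lim_{h \to 0^{+}} \frac{m\bigl(E \cap [x-h,\, x+h]\bigr)}{2h} = 1,
```

where $m$ denotes Lebesgue measure. In the density topology, a measurable set $E$ is open exactly when every point of $E$ is a density point of $E$; the "almost continuous" functions are then the approximately continuous ones, i.e. those continuous when the domain carries the density topology.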
Nathan Schmidt
Effective state, Hawking radiation and quasinormal modes for Kerr black holes (Paper)
The non-strictly continuous character of the Hawking radiation spectrum generates a natural correspondence between Hawking radiation and black hole (BH) quasinormal modes (QNM). In this work, we generalize recent results on this important issue to the framework of Kerr BHs (KBH). We show that, also for the KBH, QNMs can be naturally interpreted in terms of quantum levels. Thus, the emission or absorption of a particle is in turn interpreted in terms of a transition between two different levels. At the end of the paper, we also generalize some concepts concerning the “effective state” of a KBH.
Heather Wilber
Characterizing Uncertainty: Defining Distances on Intuitionistic Fuzzy Sets (Abstract, Paper)

12 Dec 2014

Student presentations
Monica Agana
The Koch Snowflake Curve (Abstract, Article, Slides)
Waruni Wijayasinghe
A mathematical model for lung cancer regression (Article)
Lung cancer is a deadly illness if not detected early and treated promptly. The main cause of lung cancer, as is well known, is active smoking, but recent research has found that in the USA each year about 3400 non-smoking adults suffer from this disease as a result of exposure to passive smoking. Various kinds of treatments have been identified, and it is interesting to investigate which treatments are most effective and how they should be administered to the patient. Mathematicians have made an effort to answer these questions. I will present how M. Kolev, S. Nawrocki and B. Zubik-Kowal have approached this problem and how well the clinical data and numerical data are correlated in their admirable effort.
Nicholas Lines
Cryptanalysis of NTRU with two public keys (Article)
Lattice-based cryptography has gained popularity recently due to its apparent resistance to quantum computing attacks. The first acceptable lattice-based cryptosystem was NTRU, proposed in 1996. Since its birth, the security of NTRU variants has been analyzed to identify possible improvements to the algorithm. The paper we consider, "Cryptanalysis of NTRU with two public keys" by Abderrahmane Nitaj, examines the case where two distinct (but related) public and private key pairs are generated and employed in encryption, and presents an attack that is successful in this case. We will provide relevant background information first and then discuss the paper's method and results.
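As background for the discussion, NTRU's key and ciphertext arithmetic takes place in the convolution polynomial ring Z_q[x]/(x^N - 1). The toy sketch below shows the cyclic-convolution multiplication underlying the cryptosystem (not the attack in the paper); the tiny parameters and sample polynomials are chosen purely for illustration and are far from secure.

```python
import numpy as np

def convmul(a, b, N, q):
    # Multiply polynomials a and b in the NTRU ring Z_q[x]/(x^N - 1):
    # coefficient vectors combine by cyclic convolution, reduced mod q.
    c = np.zeros(N, dtype=int)
    for i in range(N):
        for j in range(N):
            c[(i + j) % N] += a[i] * b[j]
    return c % q

# Toy parameters (real NTRU uses much larger N and q, e.g. N = 503, q = 256)
N, q = 7, 41
a = [1, 0, -1, 1, 0, 0, 1]   # small ternary coefficients, as in NTRU keys
b = [0, 1, 1, 0, -1, 0, 1]
print(convmul(a, b, N, q))
```

The attack discussed in the talk exploits the lattice structure of such products when two related key pairs are in play, reducing key recovery to finding short vectors in a lattice built from the two public keys.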
