
Math Department Colloquium

The colloquium features recent research in the mathematical and statistical sciences. 

The colloquia are scheduled for Tuesdays from 4pm-5pm in ILC 402, unless noted otherwise. Refreshments will be served in MB226 from 2pm-3pm before each talk.

If you wish to be added to the department colloquium mailing list, or if you wish to give a colloquium talk, please contact the organizer, Grady Wright.

Archive of past math department colloquium abstracts

Schedule for 2019–2020

September 10 - Aykut Satici, Boise State University

Title: Exploiting sum-of-squares programming for the analysis and design of robotic manipulators

Abstract: The main theme of this work is the use of a certain type of convex optimization to analyze and design robotic manipulators. On the analysis front, we provide a general framework to determine inner and outer approximations to the singularity-free workspace of robotic manipulators. A similar framework is utilized for optimal dimensional synthesis of robotic manipulators with respect to their kinematic and dynamic properties. This framework utilizes the sum-of-squares optimization technique, which is numerically implemented by semidefinite programming. In order to apply the sum-of-squares optimization technique, we convert the trigonometric functions in the kinematics of the manipulator to polynomial functions with an additional constraint. Sum-of-squares programming is shown to promise advantages, as it can provide globally optimal results up to machine precision and scales better with the number of design variables than other methods that can obtain globally optimal solutions.
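The trigonometric-to-polynomial conversion the abstract mentions can be sketched in a few lines. The two-link planar arm and its Jacobian determinant below are illustrative stand-ins chosen for this sketch, not taken from the talk:

```python
import sympy as sp

# Illustrative two-link planar arm: the determinant of its Jacobian is
# l1*l2*sin(theta2), and the arm is singular wherever this vanishes.
theta2, s, c, l1, l2 = sp.symbols('theta2 s c l1 l2')
detJ_trig = l1 * l2 * sp.sin(theta2)

# Polynomialize: sin(theta2) -> s, cos(theta2) -> c, plus the algebraic
# constraint s**2 + c**2 - 1 = 0 standing in for the angle variable.
detJ_poly = detJ_trig.subs({sp.sin(theta2): s, sp.cos(theta2): c})
constraint = s**2 + c**2 - 1

# detJ_poly is now polynomial in (s, c), so conditions such as
# detJ_poly > 0 on a region can be certified with sum-of-squares
# multipliers and handed to a semidefinite programming solver.
```

The payoff is that positivity of a polynomial over a semialgebraic set can be certified numerically, which is exactly the form a singularity-free-workspace question takes after this substitution.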

September 17 - Nicholas J. Horton, Amherst College

Title: Multivariate thinking and the introductory statistics and data science course: preparing students to make sense of a world of observational data

Abstract: We live in a world of ever-expanding “found” (or observational) data. To make decisions and disentangle complex relationships, students need a solid background in design and confounding. The revised Guidelines for Assessment and Instruction in Statistical Education (GAISE) College Report enunciated the importance of multivariate thinking as a way to move beyond bivariate thinking. But how do such learning outcomes compete with other aspects of statistics knowledge (e.g., inference and p-values) in introductory courses that are already overfull? In this talk I will offer some reflections and guidance about how we might move forward, with specific implications for introductory statistics and data science courses.
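A classic toy example of why bivariate summaries mislead in observational data is Simpson's paradox. The counts below follow the well-known kidney-stone pattern and are purely illustrative:

```python
# Simpson's paradox: the treatment wins within every severity stratum
# yet loses in the pooled comparison, because severity confounds
# treatment assignment. Counts follow the classic kidney-stone example.
groups = {
    # stratum: (treated_successes, treated_n, control_successes, control_n)
    "mild":   (81, 87, 234, 270),
    "severe": (192, 263, 55, 80),
}

# within each stratum, the treated success rate is higher...
for s_t, n_t, s_c, n_c in groups.values():
    assert s_t / n_t > s_c / n_c

# ...but pooled over strata, it is lower: the bivariate summary
# reverses the stratified effect
pooled_t = sum(v[0] for v in groups.values()) / sum(v[1] for v in groups.values())
pooled_c = sum(v[2] for v in groups.values()) / sum(v[3] for v in groups.values())
print(pooled_t < pooled_c)  # True
```

Seeing why both comparisons can be simultaneously true requires exactly the multivariate thinking the GAISE report calls for.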

This will be a virtual presentation via Zoom; you can view it online or come watch in ILC 402.

September 24 - Donna Calhoun, Boise State University

Title: The Serre-Green-Naghdi equations for modeling shallow, dispersive geophysical flows

Abstract: The depth-averaged shallow water wave equations are commonly used to model flows arising from natural hazards. The GeoClaw code, developed by D. George, R. J. LeVeque, M. J. Berger, K. Mandli and others, is one example of a depth-averaged flow solver now widely used for modeling tsunamis, overland flooding, debris flows, storm surges and so on. Generally, depth-averaged flow models show excellent large-scale agreement with observations and can thus be reliably used to predict whether tsunamis will reach distant coastlines and, if so, can give vital information about arrival times. However, for other types of flows, dispersive effects missing from the SWE model can play an important role in determining localized effects such as whether waves will overtop seawalls, or whether a landslide entering a lake will trigger tsunami-like behavior on the opposite shore. Because of the importance of these dispersive effects, several depth-averaged codes include dispersive corrections to the SWE. One set of equations commonly used to model these dispersive effects is the Serre-Green-Naghdi (SGN) equations.

We will present our work to include dispersive correction terms in the GeoClaw extension of ForestClaw, a parallel adaptive library for Cartesian grid methods. One formulation of the SGN equations stabilizes higher-order derivatives by treating them implicitly. As a result, a key component of an SGN solver is a variable-coefficient Poisson solver. We will discuss our current work in developing both an iterative solver, based on a multigrid-preconditioned BiCG-STAB solver (Scott Aiton, Boise State), and a direct solver based on the Hierarchical-Poincaré-Steklov (HPS) method developed by Gillman and Martinsson (2014). We will describe the SGN equations and provide an overview of their derivation, and then show preliminary results on uniform Cartesian meshes. Comparisons with the SGN solvers in Basilisk (S. Popinet) and BoussClaw (J. Kim et al.) will also be shown to verify our model.
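The variable-coefficient elliptic solve at the heart of an implicit SGN step can be sketched in 1D with SciPy's BiCG-STAB (without the multigrid preconditioner or the HPS alternative mentioned above). The coefficient and forcing below are made up for illustration:

```python
import numpy as np
from scipy.sparse import diags
from scipy.sparse.linalg import bicgstab

# 1D stand-in for the variable-coefficient elliptic problem an implicit
# SGN step produces:  -(a(x) u')' + u = f  on (0,1),  u(0) = u(1) = 0.
n = 200
h = 1.0 / (n + 1)
x_half = h * (np.arange(n + 1) + 0.5)            # cell-face points
a_half = 1.0 + 0.5 * np.sin(2 * np.pi * x_half)  # a(x) > 0 at faces

# conservative second-order finite-difference discretization
main = (a_half[:-1] + a_half[1:]) / h**2 + 1.0
off = -a_half[1:-1] / h**2
A = diags([off, main, off], offsets=[-1, 0, 1], format='csr')

f = np.ones(n)
u, info = bicgstab(A, f)   # unpreconditioned Krylov solve
assert info == 0           # 0 means converged
```

In the real solver this system is multidimensional and solved every time step, which is why preconditioning (or a fast direct method like HPS) matters so much.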

October 1 - Katherine E. Stange, University of Colorado, Boulder

Title: Cryptography in the face of quantum computers

Abstract: When quantum computers are engineered to scale, quantum algorithms will be able to break our current cryptographic standards.  So what do we replace them with?  I’ll discuss two of the front-runners:  ring-learning-with-errors (based on lattices in number fields) and isogeny-based cryptography (based on elliptic curves).  I’ll describe the fundamental “hard problems” (without assuming much background in number theory) which we believe quantum (or classical) computers cannot solve efficiently.  I’ll explain a little about why we might believe that, and what we can do if we do.
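The lattice-based "hard problem" has a compact statement: recover a secret vector from many noisy inner products. Below is a toy, completely insecure instance with tiny parameters of my choosing, using plain learning-with-errors rather than the ring variant from the talk:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy learning-with-errors (LWE) instance. Recovering s from (A, b) is
# believed hard for both classical and quantum computers at genuine
# cryptographic sizes; these parameters are tiny and insecure.
q, n, m = 97, 4, 8
s = rng.integers(0, q, n)            # secret vector
A = rng.integers(0, q, (m, n))       # public random matrix
e = rng.integers(-1, 2, m)           # small noise in {-1, 0, 1}
b = (A @ s + e) % q                  # public "noisy inner products"

# Without e, recovering s is just Gaussian elimination mod q;
# the small noise is what makes the problem hard.
```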

October 8 - James P. Keener, University of Utah

Title: The mathematics of life:  making diffusion your friend

Abstract: Diffusion is the enemy of life. This is because diffusion is a ubiquitous feature of molecular motion that is constantly spreading things out, destroying molecular aggregates. However, all living organisms, whether single-celled or multicellular, have ways to use the reality of molecular diffusion to their advantage. That is, they expend energy to concentrate molecules and then use the fact that molecules move down their concentration gradient to do useful things.

In this talk, I will show some of the ways that cells use diffusion to their advantage: to signal, to form structures and aggregates, and to measure length and population size. Among the examples I will describe are signaling by nerves, cell polarization, bacterial quorum sensing, and regulation of flagellar molecular motors. In this way, I hope to convince you that living organisms have made diffusion their friend, not their enemy.
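One concrete instance of "measuring length with diffusion" is the steady morphogen gradient: production at a boundary, diffusion with coefficient D, and first-order degradation at rate k give an exponential profile whose decay length sqrt(D/k) acts as a built-in ruler. A quick numerical check (the constants are made up for illustration):

```python
import numpy as np

# Steady state of  dc/dt = D*c'' - k*c  with a source at x = 0 is
#   c(x) = c0 * exp(-x / lam),   lam = sqrt(D / k),
# so the decay length lam is a length scale the cell gets from
# diffusion plus degradation alone.
D, k, c0 = 1.0, 4.0, 1.0
lam = np.sqrt(D / k)                 # decay length = 0.5

x = np.linspace(0.0, 2.0, 201)
c = c0 * np.exp(-x / lam)

# verify c satisfies D*c'' - k*c = 0 away from the source
# (second derivative via centered finite differences)
h = x[1] - x[0]
cxx = (c[2:] - 2 * c[1:-1] + c[:-2]) / h**2
residual = np.max(np.abs(D * cxx - k * c[1:-1]))
```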

October 15 - William A. Bogley, Oregon State University

Title: Combinatorial Group Theory Treasures from the Non-aspherical Realm

Abstract:  In the 1950s it was shown that there are significant limits to what can be discovered about groups using algorithmic means alone. For families of groups described in terms of generators and relations, it can be difficult to work out answers to elementary questions such as which members of the family are finite, or which are trivial. I will discuss the role of planar and spherical diagrams along with the concept of “asphericity,” which provides a practical filter that can be used to isolate interesting cases. I will also report on recent work by Matthias Merzenich, who has employed computer-based recursive methods to construct previously unseen objects.

October 22 - Frank Giraldo, Naval Postgraduate School

Title: Efficient Time-Integration Strategies for Non-hydrostatic Atmospheric Models

Abstract: The Non-hydrostatic Unified Model of the Atmosphere (NUMA) is a compressible Navier-Stokes solver that sits inside the U.S. Navy’s NEPTUNE weather model, based on high-order continuous and discontinuous Galerkin (CG/DG) methods. Therefore, it is imperative that NUMA run as efficiently as possible without sacrificing accuracy and conservation. One of the last places to squeeze out more performance is the time-integration strategy used in the model. In this talk, I will review the various time-integration strategies currently available in NUMA: fully explicit methods with large stability regions, fully implicit methods, implicit-explicit (IMEX) methods, and multirate methods. With multirate methods, the idea is to partition the processes with different speeds (or stiffness) in some hierarchical way in order to use time-steps commensurate with the wave speed of each process. However, gaining the full benefit of this approach requires embracing the idea at the code level, which means that the time-integrators and spatial discretization methods must be fully aware of each other and complicates the code. I will also discuss our recent results on time-integrators and possible preconditioning strategies for the implicit solvers; our experience shows that IMEX time-integrators are competitive with explicit time-integrators only when the Schur complement form is used.
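The payoff of the implicit-explicit split can be seen in a toy first-order IMEX scheme (forward-backward Euler) on a stiff scalar test problem. This is only a sketch of the idea; NUMA's actual IMEX schemes are higher-order and applied to the stiff terms of the compressible equations:

```python
import numpy as np

# u' = -k*u + sin(t): a stiff linear decay (treated implicitly) plus a
# slow forcing (treated explicitly). dt = 0.01 is 5x the explicit
# stability limit 2/k = 0.002, yet the IMEX step remains stable.
k, dt = 1000.0, 0.01
t, u = 0.0, 1.0
for _ in range(1000):
    # u_{n+1} = u_n + dt*sin(t_n) - dt*k*u_{n+1}  =>  solve for u_{n+1}
    u = (u + dt * np.sin(t)) / (1.0 + dt * k)
    t += dt

# a fully explicit Euler step at this dt has per-step amplification
# |1 - dt*k| = 9 on the stiff term and blows up immediately
```

Because only the stiff part is implicit, each step needs a solve involving just that operator, which is where the Schur complement form mentioned above earns its keep in the full model.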