The flows of climatically important tracers, like heat and carbon, into the deep ocean shape Earth's climate change trajectory and are thus important to understand deeply enough that they can be adequately represented in ocean models. The largest of these flows can be "resolved" by a numerical ocean model and emerge naturally from solving the equations of fluid mechanics on its discretized grid, while those that occur on scales smaller than a grid cell must be added separately using "parameterizations". Parameterization is the process of distilling the essence of a physical process into a simple formula or algorithm that depends only on the "resolved" environment, thus giving us a self-consistent way of closing the model equations without leaving out any important physics. The process of parameterization is complicated by the turbulent nature of oceanic flows, which links fluctuations at the smallest scales (a few cm) to global currents (10,000 km) and makes the ocean inherently chaotic and unpredictable. Developing a parameterization requires both a theoretical understanding of the process and a training data set for calibration; in-situ observations and high-resolution simulations are both commonly used. While many of the most important surface mixed layer processes are well understood and have been successfully parameterized, the equivalent processes in the bottom mixed layer remain poorly understood.
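To make the idea concrete, here is a minimal sketch in Julia of the simplest possible parameterization: a downgradient diffusive closure that expresses the unresolved vertical heat flux purely in terms of the resolved temperature profile. The function and the constant diffusivity here are illustrative choices of mine, not any particular model's scheme.

```julia
# Minimal sketch of a subgrid-scale "parameterization": the unresolved
# turbulent flux of a resolved tracer (here temperature T on a coarse
# vertical grid) is written purely as a function of the resolved state.
# This is the classic downgradient closure w'T' ≈ -κ ∂T/∂z; the constant κ
# is an illustrative placeholder, since real schemes make κ depend on the
# resolved flow (shear, stratification, proximity to boundaries, etc.).

function subgrid_flux(T::Vector{Float64}, Δz::Float64; κ::Float64=1e-5)
    N = length(T)
    flux = similar(T)
    flux[1] = -κ * (T[2] - T[1]) / Δz             # one-sided at the boundaries
    flux[N] = -κ * (T[N] - T[N-1]) / Δz
    for k in 2:N-1
        flux[k] = -κ * (T[k+1] - T[k-1]) / (2Δz)  # centered in the interior
    end
    return flux
end

# Example: a stably stratified column (warm above cold), 100 m grid spacing
T = collect(range(2.0, 18.0, length=41))  # coldest at the bottom (k = 1)
flux = subgrid_flux(T, 100.0)             # negative: heat diffuses downward
```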
In my research, I develop theoretical models of deep ocean transport processes and test these ideas with available data and quasi-realistic simulations. I plan on using these theories, simulations, and observations to improve theoretical understanding of the global ocean circulation, develop new parameterizations for climate models, and inform future observational campaigns (such as the upcoming Bottom Layer Turbulence program).
At the high latitudes of the Arctic and Antarctic, cold, salty, and hence dense surface waters sink to fill the dark abyss of the deep ocean. Since seawater mass is conserved, we know that these deep waters must return to the surface elsewhere. It is convenient to split this global overturning circulation into two vertically stacked (and partially connected) cells with different dynamics: a wind-driven adiabatic (along density surfaces) upper cell and a mixing-driven diabatic (across density surfaces) lower cell. In the upper cell, dense waters form in the North Atlantic during winter storms and flow south along density surfaces towards the Southern Ocean, where their upwelling by a "residual circulation" along density surfaces is the result of a competition between winds and turbulent eddies. These newly upwelled waters are then lightened by an influx of freshwater from melting sea ice and transported back northwards to close the upper cell. In the lower cell, on the other hand, the bottom waters that form off the coast of Antarctica are so dense that they do not outcrop to the surface anywhere else in the ocean, and thus must upwell "diabatically", i.e. by some process that changes their density. Oceanographers originally theorized that these dense abyssal waters upwelled by mixing vigorously with lighter waters. This theory was later challenged by observations: the turbulent mixing measured in the open ocean was too weak to drive the required upwelling. Eventually, observations revealed that turbulence levels increase dramatically near the jagged hills lining the sea floor, where we now know powerful internal waves break and mix up the water column. These new observations seemed paradoxical at first: since both the mixing and the density of seawater increase with depth, deep waters should mix preferentially with the denser waters below them, and thus the mixing should drive sinking, not upwelling! The paradox is resolved by considering what happens right at the bottom of the ocean, where the mixing runs into the seafloor and causes a thin but vigorous burst of upwelling (see schematic to the right).
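This resolution can be made quantitative with one line of math. Below is a sketch, in my own notation rather than from any particular paper, of the classic one-dimensional balance between upwelling and mixing:

```latex
% One-dimensional balance between vertical advection and turbulent mixing
% of density \rho(z), with upwelling velocity w and mixing coefficient \kappa(z):
\[
    w \, \frac{\partial \rho}{\partial z}
    \;=\;
    \frac{\partial}{\partial z}\!\left( \kappa \, \frac{\partial \rho}{\partial z} \right)
    \qquad \Longrightarrow \qquad
    w \;=\; \frac{-\,\partial F_\rho / \partial z}{\partial \rho / \partial z},
    \qquad F_\rho \equiv -\kappa \, \frac{\partial \rho}{\partial z}.
\]
% Since density decreases upward (\partial\rho/\partial z < 0), w has the
% same sign as \partial F_\rho/\partial z. In the interior, the turbulent
% density flux F_\rho weakens with height above the rough seafloor, so
% w < 0: sinking. At the seafloor itself, F_\rho must vanish (no flux
% through the solid boundary), so F_\rho increases sharply with height in
% a thin bottom layer, forcing w > 0: vigorous upwelling.
```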
In my research, I use geophysical theory and numerical simulations to provide insights into the various processes described above and the global-scale circulations that emerge when they are combined.
My preferred scientific approach is to develop the simplest possible model that can answer a given scientific question. The scientific problem that I am most passionate about, and which connects all of my research interests, is the long-term (10–10,000 year) evolution of Earth's climate. On these long timescales, the exchange of heat and carbon between the atmosphere and the ocean, and their distribution within the deep ocean, are crucial controls on Earth's climate. In the context of ongoing anthropogenic climate change, humans emit greenhouse gases, which weaken the atmosphere's ability to cool itself by radiating heat to space and cause the surface to warm. The atmosphere and surface of the planet respond to this warming effect (or "forcing") in ways that, on net, amplify the warming (a "positive feedback"). Thankfully, about 30% of this potential global warming is delayed for centuries as the ocean takes up much of the excess heat. These three processes (greenhouse gas radiative forcing, radiative feedbacks, and ocean heat uptake) correspond to the three key parameters in a widely used zero-dimensional "energy balance model" of Earth's climate. This extremely simple model can be tuned to more complicated "general circulation models" to yield remarkably accurate projections of global warming (see "Evaluating historical climate models" below) and forms the basis of many climate-economic models of climate change (see my extremely simple one in the schematic on the right).
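Here is a minimal sketch in Julia of such a model, in the spirit of widely used two-layer formulations: a surface layer damped by radiative feedbacks and a deep-ocean layer that delays the warming by taking up heat. The structure and all parameter values below are illustrative choices of mine, not a calibrated configuration.

```julia
# Zero-dimensional "energy balance model" sketch: surface temperature T is
# forced by greenhouse radiative forcing F(t), damped by a net radiative
# feedback λ, and loses heat to a deep-ocean layer T_D at a rate set by the
# heat-uptake efficiency γ. All parameter values are illustrative round numbers.

F(t) = 4.0 * min(t / 70, 1.0)   # forcing ramping to ~4 W/m² (≈ 2×CO₂) over 70 yr

function ebm(; λ=1.3, γ=0.7, C=8.0, C_D=100.0, Δt=0.1, nyears=200)
    # λ, γ in W/m²/K; C, C_D in W·yr/m²/K (heat capacities per unit area)
    nt = round(Int, nyears / Δt)
    T, T_D = zeros(nt + 1), zeros(nt + 1)
    for n in 1:nt
        t = n * Δt
        T[n+1]   = T[n]   + Δt / C   * (F(t) - λ*T[n] - γ*(T[n] - T_D[n]))
        T_D[n+1] = T_D[n] + Δt / C_D * (γ * (T[n] - T_D[n]))
    end
    return T, T_D
end

T, T_D = ebm()
# Ocean heat uptake (the γ term) delays the surface warming: T approaches
# its equilibrium value F/λ only over the centuries it takes T_D to catch up.
```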
I am interested in the dynamical processes that determine the rate of ocean heat (and carbon) uptake in these conceptual energy balance models, how these processes might change over time, and the implications for long-term climate policy. I plan to continue developing ClimateMARGO.jl, an extremely simple climate-economic optimization model I created, and to use it to answer fundamental questions about possible climate change trajectories and how they depend on our understanding (or lack thereof) of climate science.
How accurate are numerical climate models? On what basis, if any, can we trust their projections of future human-caused climate change? While weather prediction enjoys the benefit of nearly-real-time feedback ("The weather model predicted it would be sunny today, but it's raining!"), projections of human-induced climate change take decades to emerge from the chaos of natural variability. Unfortunately, we do not have decades to wait: we need climate action now to avoid the most catastrophic climate impacts. Climate scientists typically use indirect methods to build confidence in climate model projections: developing a better understanding of fundamental climate physics, narrowing the spread across different climate models, and improving how well models reproduce historical observations.
Over the last few years, my collaborators and I have been taking a different approach: while we cannot just wait decades to find out whether today's climate projections come true, we can dig up decades-old projections and see how well they held up. (Spoiler: they did pretty well, as shown in the figure on the right.) By combing through old IPCC archives and digitizing figures from historical papers, we have reconstructed a unique dataset of nearly every single climate model projection ever made. The dataset is rich, and I am excited about all of the otherwise unanswerable questions it allows us to answer!
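As one example of what becomes possible, here is a minimal sketch in Julia of one simple way an old projection could be scored against observations. The scoring metric and names here are my own illustration, not our published methodology.

```julia
using Statistics  # for mean

# Score an old projection by the ratio of its linear warming trend to the
# observed trend over the same years; a ratio near 1 means the model warmed
# at about the right rate. A simplified illustration of this kind of analysis.

"Least-squares slope of y against x."
slope(x, y) = sum((x .- mean(x)) .* (y .- mean(y))) / sum((x .- mean(x)) .^ 2)

"Ratio of projected to observed linear warming trends over the same years."
trend_ratio(years, projected, observed) =
    slope(years, projected) / slope(years, observed)

# e.g. trend_ratio(1970:2017, T_projected, T_observed), where T_projected and
# T_observed are global-mean temperature anomalies over those years.
```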