Training machines to understand the Universe

Dr. Daniel Masters, Jet Propulsion Laboratory
ABSTRACT

Upcoming large-scale datasets in astrophysics will challenge our ability to effectively analyze and interpret the data. Surveys of the 2020s (e.g., Euclid, LSST, WFIRST, and SPHEREx) will provide multiple deep views of the universe, each with its own observational characteristics such as noise level, resolution, and wavelength coverage. How do we best interpret the millions to billions of galaxy images these surveys will obtain? What techniques from machine learning can be brought to bear to maximize the information extracted from data that is noisy and inhomogeneous, and for which we have limited training samples? We know that these datasets will encode huge amounts of physics about galaxy evolution and cosmology. I will discuss some promising approaches from non-linear dimensionality reduction (also known as manifold learning) that we have been using to address these challenges. I will also discuss open problems we are working on, including high-dimensional “extreme” deconvolution, as well as the possibility of incorporating domain knowledge into these techniques to increase their effectiveness.
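To illustrate the kind of technique the abstract refers to, the sketch below applies non-linear dimensionality reduction (manifold learning) to synthetic data using scikit-learn's Isomap. This is only a minimal illustration of the general idea, not Dr. Masters' actual pipeline; the synthetic S-curve dataset stands in for noisy, high-dimensional survey photometry.

```python
# Minimal manifold-learning sketch (assumption: not the speaker's method).
from sklearn.datasets import make_s_curve
from sklearn.manifold import Isomap

# Synthetic stand-in for noisy survey data: points lying on a 2-D
# manifold embedded in 3-D, perturbed by observational noise.
X, color = make_s_curve(n_samples=1000, noise=0.05, random_state=0)

# Non-linear dimensionality reduction: recover a 2-D embedding that
# approximately preserves geodesic distances along the manifold.
embedding = Isomap(n_neighbors=10, n_components=2).fit_transform(X)

print(embedding.shape)  # (1000, 2)
```

In practice the input would be galaxy colors or pixel fluxes in many bands rather than three synthetic coordinates, and the learned low-dimensional embedding can then be used for tasks such as photometric redshift estimation or outlier identification.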