Towards more robust and explainable neural nets: Beyond the black box
Deep neural nets have achieved ever-greater successes in recent years, yet a pervasive sentiment persists in our field that "they are black boxes" and "we will never understand them." A thorough understanding remains elusive, but we can conceptualize what a neural net is doing using familiar tools and concepts, such as probability distribution functions, fitting functions, and Bayesian statistics. We can leverage results from the field of "Adversarial AI" to study important failure modes, and learn to mitigate them by addressing the underlying cause of failure. By training neural networks better, using them better, and evaluating their performance better, we may soon be able to apply a powerful set of tools to astronomical problems with rigor and transparency.
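As an illustration of the adversarial failure modes mentioned above, the sketch below implements the fast gradient sign method (FGSM), a standard construction from the adversarial-AI literature: an input is nudged in the direction that maximally increases the model's loss. The "model" here is a hypothetical single-example logistic regression (not from the text), chosen only so the example is self-contained; the same gradient-sign idea applies to deep networks.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def loss(x, y, w, b):
    """Binary cross-entropy for a single example."""
    p = sigmoid(w @ x + b)
    return -(y * np.log(p) + (1 - y) * np.log(1 - p))

def fgsm(x, y, w, b, eps):
    """Perturb x one step in the sign of the loss gradient w.r.t. x."""
    p = sigmoid(w @ x + b)
    grad_x = (p - y) * w  # analytic d(loss)/dx for logistic regression
    return x + eps * np.sign(grad_x)

# Hypothetical weights and input, fixed by a seed for reproducibility.
rng = np.random.default_rng(0)
w = rng.normal(size=5)
b = 0.1
x = rng.normal(size=5)
y = 1.0

x_adv = fgsm(x, y, w, b, eps=0.3)
print("clean loss:      ", loss(x, y, w, b))
print("adversarial loss:", loss(x_adv, y, w, b))  # strictly larger
```

Even this tiny perturbation (bounded by eps in each coordinate) provably raises the loss for a linear-in-x model, which is why studying such attacks exposes how brittle a trained net's decision surface can be.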