Liam Hodgkinson
Lecturer in Statistics (Data Science)
Office: 114 Peter Hall Building, Parkville, VIC, Australia
I am a lecturer in Data Science in the School of Mathematics and Statistics at the University of Melbourne. Previously, I was a postdoctoral research scholar at the University of California, Berkeley, and the International Computer Science Institute (ICSI), under the supervision of Michael Mahoney. I obtained my Ph.D. from the University of Queensland, supervised by Prof. Philip Pollett and Dr. Ross McVinish.
My research interests lie primarily in probabilistic machine learning (including Bayesian methods), the theory of deep learning, and robust training schemes for neural networks. I regularly use analytical probability theory (including the development of generalization bounds) to understand and develop new methodology in machine learning.
Publications

Evaluating natural language processing models with generalization metrics that do not need access to any training or testing data
Yaoqing Yang, Ryan Theisen, Liam Hodgkinson, Joseph E. Gonzalez, Kannan Ramchandran, Charles H. Martin, Michael W. Mahoney
arXiv preprint, 2022. [arXiv].
The reproducing Stein kernel approach for post-hoc corrected sampling
Liam Hodgkinson, Robert Salomone, Fred Roosta
arXiv preprint, 2020. [arXiv].
Generalization Bounds using Lower Tail Exponents in Stochastic Optimizers
Liam Hodgkinson, Umut Şimşekli, Rajiv Khanna, Michael W. Mahoney
39th International Conference on Machine Learning (ICML 2022), 2022. [arXiv].
Fat-Tailed Variational Inference with Anisotropic Tail Adaptive Flows
Feynman Liang, Liam Hodgkinson, Michael W. Mahoney
39th International Conference on Machine Learning (ICML 2022), 2022. [PMLR] [arXiv].
Stateful ODE-Nets using Basis Function Expansions
Alejandro Queiruga, N. Benjamin Erichson, Liam Hodgkinson, Michael W. Mahoney
35th Conference on Neural Information Processing Systems (NeurIPS 2021), 2021. [arXiv].
Taxonomizing local versus global structure in neural network loss landscapes
Yaoqing Yang, Liam Hodgkinson, Ryan Theisen, Joe Zou, Joseph E. Gonzalez, Kannan Ramchandran, Michael W. Mahoney
35th Conference on Neural Information Processing Systems (NeurIPS 2021), 2021. [arXiv].
Noisy recurrent neural networks
Soon Hoe Lim, N. Benjamin Erichson, Liam Hodgkinson, Michael W. Mahoney
35th Conference on Neural Information Processing Systems (NeurIPS 2021), 2021. [arXiv].
Fast approximate simulation of finite long-range spin systems
Ross McVinish, Liam Hodgkinson
The Annals of Applied Probability, 31(3), 1443-1473, 2021. doi:10.1214/20-AAP1624 [arXiv].
Stochastic Continuous Normalizing Flows
Liam Hodgkinson, Chris van der Heide, Fred Roosta, Michael W. Mahoney
37th Conference on Uncertainty in Artificial Intelligence (UAI 2021), 2021. [UAI] [arXiv].
Geometric Rates of Convergence for Kernel-based Sampling Algorithms
Rajiv Khanna, Liam Hodgkinson, Michael W. Mahoney
37th Conference on Uncertainty in Artificial Intelligence (UAI 2021), 2021. [UAI] [arXiv].
Multiplicative noise and heavy tails in stochastic optimization
Liam Hodgkinson, Michael W. Mahoney
38th International Conference on Machine Learning (ICML 2021), 2021. [PMLR] [arXiv].
Shadow Manifold Hamiltonian Monte Carlo
Chris van der Heide, Liam Hodgkinson, Fred Roosta, Dirk Kroese
24th International Conference on Artificial Intelligence and Statistics (AISTATS 2021), 2021. [PMLR].
Implicit Langevin Algorithms for Sampling From Log-concave Densities
Liam Hodgkinson, Robert Salomone, Fred Roosta
Journal of Machine Learning Research (JMLR), 22(136), 1-30, 2021. [JMLR] [arXiv].
Lipschitz Recurrent Neural Networks
N. Benjamin Erichson, Omri Azencot, Alejandro Queiruga, Liam Hodgkinson, Michael W. Mahoney
9th International Conference on Learning Representations (ICLR 2021), 2021. [arXiv].
Normal approximations for discrete-time occupancy processes
Liam Hodgkinson, Ross McVinish, Philip K. Pollett
Stochastic Processes and their Applications, 2020. doi:10.1016/j.spa.2020.05.016 [arXiv].
Talks

Generalization Bounds using Lower Tail Exponents in Stochastic Optimizers
39th International Conference on Machine Learning (ICML 2022), 2022.
Stochastic Continuous Normalizing Flows
37th Conference on Uncertainty in Artificial Intelligence (UAI 2021), 2021.
Multiplicative noise and heavy tails in stochastic optimization
38th International Conference on Machine Learning (ICML 2021), 2021.
Multiplicative noise and heavy tails in stochastic optimization
Second Symposium on Machine Learning and Dynamical Systems, The Fields Institute, 2020. [Video].
Implicit Langevin algorithms for sampling from log-concave densities
20th INFORMS Applied Probability Society Conference, 2019.
Central limit theorems for dynamic random graph models
62nd Annual Meeting of the Australian Mathematical Society, 2018 (awarded the B.H. Neumann Prize).
The long-term behaviour of an occupancy process
61st Annual Meeting of the Australian Mathematical Society, 2017.
Normal approximations for binary weakly interacting particle systems
3rd Melbourne-Singapore Probability and Statistics Forum, 2017 (in celebration of Andrew Barbour's 70th birthday).
Approximations for Large-Scale Occupancy Processes in Discrete Time
60th Annual Meeting of the Australian Mathematical Society, 2016.
Notes

An Introduction to Stein's Method [Slides] [Proof of the Berry-Esseen Theorem]
Posters

Generalization Bounds using Lower Tail Exponents in Stochastic Optimizers
39th International Conference on Machine Learning (ICML 2022), 2022.
Stochastic Continuous Normalizing Flows
37th Conference on Uncertainty in Artificial Intelligence (UAI 2021), 2021.
Multiplicative Noise and Heavy Tails in Stochastic Optimization
38th International Conference on Machine Learning (ICML 2021), 2021.
Normal Approximations for Occupancy Processes using Stein's Method
40th Conference on Stochastic Processes and their Applications, 2018 (awarded a Springer Best Poster Prize).