Nhat Ho

Assistant Professor
Department of Statistics and Data Sciences
The University of Texas at Austin

Other affiliations:
Core member, Machine Learning Laboratory

Email: minhnhat@utexas.edu
Office: WEL 5.242, 105 E 24th Street Austin, TX 78712

Brief Biography


I am currently an Assistant Professor of Statistics and Data Sciences at the University of Texas at Austin. I am also a core member of the Machine Learning Laboratory and senior personnel of the Institute for Foundations of Machine Learning (IFML). Before moving to Austin, I was a postdoctoral fellow in the Electrical Engineering and Computer Science (EECS) Department at the University of California, Berkeley, where I was very fortunate to be mentored by Professor Michael I. Jordan and Professor Martin J. Wainwright. Going further back, I finished my Ph.D. in 2017 at the Department of Statistics, University of Michigan, Ann Arbor, where I was very fortunate to be advised by Professor Long Nguyen and Professor Ya'acov Ritov.

Research Interests


My research centers on four important aspects of complex and large-scale models and data:

  • (1) Interpretability, Efficiency, and Robustness of deep learning and complex machine learning models, including Transformer architectures, Deep Generative Models, Convolutional Neural Networks, etc.;

  • (2) Scalability of Optimal Transport for machine learning and deep learning applications;

  • (3) Stability and Optimality of optimization and sampling algorithms for solving complex statistical machine learning models;

  • (4) Heterogeneity of complex data, including Mixture and Hierarchical Models, Bayesian Nonparametrics, etc.

For the first aspect (1), we utilize insights from statistical machine learning models and theory, as well as Hamilton-Jacobi partial differential equations (PDEs), to understand deep learning and complex machine learning models. Examples of our work (T.1, T.2) include using mixture and hierarchical models to reduce the redundancy in Transformers, a recent state-of-the-art deep learning architecture for language and computer vision applications. Furthermore, we also utilize the Fourier Integral Theorem and its generalized version (T.3), a beautiful result in mathematics, to improve the interpretability and performance of Transformers. The Fourier Integral Theorem is also used in our other work to build estimators for further machine learning and statistics applications (T.4). Finally, we also develop a Bayesian deconvolution model (T.5) to understand Convolutional Neural Networks.
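To give a flavor of the Fourier Integral Theorem as an estimation device, the sketch below implements the standard sinc-kernel density estimator it induces, f_R(x) = (1/n) Σ_i sin(R(x − X_i)) / (π(x − X_i)). This is a minimal illustrative sketch, not the specific estimators of papers T.3 or T.4; the function name and the choice of R are my own.

```python
import numpy as np

def fourier_integral_density(x, samples, R=8.0):
    """Fourier-integral (sinc-kernel) density estimate at query points x.

    As R -> infinity and the sample size grows, the estimate converges to
    the true density, without choosing a classical smoothing kernel.
    """
    diff = x[:, None] - samples[None, :]            # pairwise x - X_i
    # sin(R d) / (pi d) written via np.sinc, where np.sinc(z) = sin(pi z)/(pi z)
    kern = (R / np.pi) * np.sinc(R * diff / np.pi)
    return kern.mean(axis=1)                        # average over samples

# Illustration: estimate a standard normal density at the origin
rng = np.random.default_rng(0)
samples = rng.normal(size=5000)
est = fourier_integral_density(np.array([0.0]), samples)[0]
```

With 5000 standard-normal samples, the estimate at 0 lands near the true value 1/sqrt(2*pi) ≈ 0.399; the truncation bias vanishes rapidly in R because the Gaussian characteristic function decays quickly.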

For the second aspect (2), we focus on improving the scalability of optimal transport and mitigating its curse of dimensionality in deep learning applications, such as deep generative models, domain adaptation, etc. For the curse of dimensionality, we propose several new variants of sliced optimal transport (OT.1, OT.2) that not only circumvent the curse of dimensionality of optimal transport but also improve the sampling scheme and training procedure of sliced optimal transport so that the most informative projection directions are selected. For the scalability, we propose new minibatch frameworks (OT.3, OT.4) to alleviate the misspecified matching issues of current minibatch optimal transport approaches in the literature. Furthermore, we also develop several optimization algorithms with near-optimal computational complexities (OT.5, OT.6, OT.7) for approximating optimal transport and its variants.
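For readers unfamiliar with the baseline that these variants build on, the vanilla sliced Wasserstein distance projects both point clouds onto random one-dimensional directions, where optimal transport reduces to sorting. The sketch below is a minimal Monte Carlo implementation with uniform random directions; it does not include the learned or importance-weighted slicing schemes of OT.1 and OT.2, and the function name and parameters are my own.

```python
import numpy as np

def sliced_wasserstein(X, Y, n_proj=100, seed=0):
    """Monte Carlo estimate of the sliced 2-Wasserstein distance between
    empirical measures on X and Y (equal sample sizes, shape (n, d))."""
    rng = np.random.default_rng(seed)
    d = X.shape[1]
    # Uniform random directions on the unit sphere
    theta = rng.normal(size=(n_proj, d))
    theta /= np.linalg.norm(theta, axis=1, keepdims=True)
    # Project to 1-D, where OT between sorted samples is in closed form
    Xp = np.sort(X @ theta.T, axis=0)
    Yp = np.sort(Y @ theta.T, axis=0)
    return np.sqrt(np.mean((Xp - Yp) ** 2))
```

Each projection costs only a sort, O(n log n), which is why sliced variants scale so much better than solving the full d-dimensional transport problem; the price is that uniform slicing wastes many directions, which motivates the improved direction-selection schemes mentioned above.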

For the third aspect (3), we study the interplay and trade-offs between the instability, statistical accuracy, and computational efficiency of optimization and sampling algorithms (O.1) for parameter estimation in statistical machine learning models. Based on these insights, we provide a rigorous characterization of the statistical behavior of the Expectation-Maximization (EM) algorithm (O.2, O.3) for solving mixture models and of factorized gradient descent for a class of low-rank matrix factorization problems (O.4). Finally, in recent work, we propose an exponential step-size schedule for gradient descent (O.5) and demonstrate that this algorithm attains the optimal linear computational complexity for parameter estimation in statistical machine learning models.
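As a concrete reference point for the EM analyses above, the sketch below runs EM on the simplest interesting case: a two-component, one-dimensional Gaussian mixture with unit variances, where only the means and the mixing weight are estimated. This is a toy sketch of the generic algorithm under those simplifying assumptions, not the settings analyzed in O.2 or O.3; the function name and initialization are my own.

```python
import numpy as np

def em_gmm_1d(x, n_iter=50):
    """EM for a two-component 1-D Gaussian mixture with unit variances.

    Returns the estimated component means and the mixing weight of
    component 1.
    """
    mu = np.array([x.min(), x.max()])   # crude spread-out initialization
    pi = 0.5
    for _ in range(n_iter):
        # E-step: posterior responsibility of component 1 for each point
        log_p1 = np.log(pi) - 0.5 * (x - mu[0]) ** 2
        log_p2 = np.log(1 - pi) - 0.5 * (x - mu[1]) ** 2
        r = 1.0 / (1.0 + np.exp(log_p2 - log_p1))
        # M-step: responsibility-weighted means and mixing proportion
        mu = np.array([np.sum(r * x) / r.sum(),
                       np.sum((1 - r) * x) / (1 - r).sum()])
        pi = r.mean()
    return mu, pi
```

On well-separated data this iteration converges geometrically, while for overlapping or over-specified mixtures EM slows down dramatically; quantifying exactly that slowdown, and its interaction with statistical accuracy, is the subject of the trade-off results cited above.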

For the fourth aspect (4), an example of our work is the study of the statistical and geometric behavior of latent variables in mixture and hierarchical models via tools from optimal transport, quantization theory, and algebraic geometry. For example, we demonstrate that the convergence rates of maximum likelihood estimation for finite mixture models and finite mixtures of experts are determined by the solvability of certain systems of polynomial equations, one of the key problems in algebraic geometry (M.1, M.2, M.3, M.4). These theories, together with the convergence rates of the MLE, also lead to a novel model selection procedure (M.5) for finite and infinite mixture models.

Apart from these topics, we also study Bayesian inference and asymptotics from new perspectives. For example, we utilize diffusion processes to establish posterior convergence rates of parameters in statistical models (E.1) and employ the Fourier Integral Theorem to establish the posterior consistency of Bayesian nonparametric models (E.2).

Code


The official GitHub organization hosting code for research papers from our Data Science and Machine Learning (DSML) Lab is: https://github.com/UT-Austin-Data-Science-Group.

Editorial Boards of Journals


Area Chairs of Conferences in Machine Learning


Recent News


  • [01/2023] I am glad to serve as an Associate Editor of the Electronic Journal of Statistics, one of the top journals in Statistics and Data Science

  • [01/2023] Three papers (1, 2, 3) are accepted to International Conference on Learning Representations (ICLR) and International Conference on Artificial Intelligence and Statistics (AISTATS)

  • [11/2022] The paper "Instability, computational efficiency, and statistical accuracy", coauthored with Raaz Dwivedi, Koulik Khamaru, Martin J. Wainwright, Michael I. Jordan, and Bin Yu, was accepted to the Journal of Machine Learning Research (JMLR) subject to minor revision

  • [09/2022] Six papers (1, 2, 3, 4, 5, 6) were accepted to Conference on Neural Information Processing Systems (NeurIPS), 2022

  • [09/2022] The paper "Bayesian consistency with the supremum metric", coauthored with Stephen Walker, was accepted to Statistica Sinica

  • [08/2022] I am glad to serve as an Area Chair of the International Conference on Artificial Intelligence and Statistics (AISTATS), 2023

  • [08/2022] The paper "Convergence rates for Gaussian mixtures of experts", coauthored with Chiao-Yu Yang and Michael I. Jordan, was accepted to the Journal of Machine Learning Research (JMLR)

  • [05/2022] Six papers (1, 2, 3, 4, 5, 6) were accepted to International Conference on Machine Learning (ICML), 2022

  • [05/2022] The paper "On the efficiency of entropic regularized algorithms for optimal transport", coauthored with Tianyi Lin and Michael I. Jordan, was accepted to the Journal of Machine Learning Research (JMLR), 2022

  • [02/2022] The paper "On the complexity of approximating multi-marginal optimal transport", coauthored with Tianyi Lin, Marco Cuturi, and Michael I. Jordan, was accepted to the Journal of Machine Learning Research (JMLR), 2022

  • [01/2022] Four papers (1, 2, 3, 4) were accepted to International Conference on Artificial Intelligence and Statistics (AISTATS), 2022

Selected Publications on Theory (Hierarchical and Mixture Models, (Non)-Convex Optimization, Optimal Transport, (Approximate) Bayesian Inference, etc.)


Selected Publications on Method and Application (Generative Models, Optimal Transport, Transformer, Convolutional Neural Networks, etc.)


  • A primal-dual framework for transformers and neural networks. ICLR, 2023 (Spotlight).
    Tan Minh Nguyen, Tam Minh Nguyen, Nhat Ho, Andrea L. Bertozzi, Richard Baraniuk, Stanley Osher.

  • Improving Transformer with an admixture of attention heads. Advances in NeurIPS, 2022.
    Tam Nguyen*, Tan Nguyen*, Hai Do, Khai Nguyen, Vishwanath Saragadam, Minh Pham, Khuong Nguyen, Stanley Osher, Nhat Ho.

  • Efficient Transformer with compressive sensing. Under review.
    Tan Minh Nguyen, Tho Tran Huu, Tam Minh Nguyen, Minh Pham, Stanley Osher, Nhat Ho.