Nhat Ho

Assistant Professor
Department of Statistics and Data Sciences
The University of Texas at Austin

Other affiliations:
Core member, Machine Learning Laboratory
Senior personnel, Institute for Foundations of Machine Learning (IFML)

Email: minhnhat@utexas.edu
Office: WEL 5.242, 105 E 24th Street, Austin, TX 78712

Brief Biography

I am currently an Assistant Professor of Statistics and Data Sciences at the University of Texas at Austin. I am also a core member of the Machine Learning Laboratory and senior personnel of the Institute for Foundations of Machine Learning (IFML). Before coming to Austin, I was a postdoctoral fellow in the Electrical Engineering and Computer Science (EECS) Department at the University of California, Berkeley, where I was very fortunate to be mentored by Professor Michael I. Jordan and Professor Martin J. Wainwright. Going further back, I received my Ph.D. in 2017 from the Department of Statistics at the University of Michigan, Ann Arbor, where I was very fortunate to be advised by Professor Long Nguyen and Professor Ya'acov Ritov.

Research Interests

My research centers on four important aspects of complex and large-scale models and data:

  • (1) Interpretability, Efficiency, and Robustness of deep learning and complex machine learning models, including Transformer architectures, Deep Generative Models, Convolutional Neural Networks, etc.;

  • (2) Scalability of Optimal Transport for machine learning and deep learning applications;

  • (3) Stability and Optimality of optimization and sampling algorithms for solving complex statistical machine learning models;

  • (4) Heterogeneity of complex data, including Mixture and Hierarchical Models, Bayesian Nonparametrics, etc.

For the first aspect (1), we utilize insights from statistical machine learning models and theory, as well as Hamilton-Jacobi partial differential equations (PDEs), to understand deep learning and complex machine learning models. Examples of our work include using mixture and hierarchical models (T.1, T.2) to reduce the redundancy in the Transformer, a recent state-of-the-art deep learning architecture for language and computer vision applications, and interpreting the Transformer through primal-dual frameworks from support vector regression (T.3). Furthermore, we utilize the Fourier Integral Theorem and its generalized version (T.4), a beautiful result in mathematics, to improve the interpretability and performance of the Transformer; the theorem also underlies estimators we build for other machine learning and statistics applications (T.5). Finally, we develop a Bayesian deconvolution model (T.6) to understand Convolutional Neural Networks and provide a complete theory for the neural collapse phenomenon in deep linear neural networks (T.7).
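To make the role of the Fourier Integral Theorem concrete, here is its standard statement in sinc-kernel form, together with the kernel-free Monte Carlo estimator it naturally suggests (a minimal sketch in plain notation; the estimators in the papers above differ in their details):

    f(y) = lim_{R → ∞} ∫_{ℝ^d} ∏_{j=1}^{d} [ sin(R(y_j − x_j)) / (π(y_j − x_j)) ] f(x) dx.

Replacing the integral with an average over samples X_1, ..., X_n drawn from f gives

    f_R(y) ≈ (1/n) ∑_{i=1}^{n} ∏_{j=1}^{d} sin(R(y_j − X_{ij})) / (π(y_j − X_{ij})),

where a single truncation parameter R controls the smoothing in all d coordinates, with no kernel or bandwidth matrix to choose.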

For the second aspect (2), we focus on improving the scalability of optimal transport and mitigating its curse of dimensionality in deep learning applications, such as deep generative models, domain adaptation, etc. For the curse of dimensionality, we propose several new variants of sliced optimal transport (OT.1, OT.2, OT.3, OT.4, OT.5) that not only circumvent the curse of dimensionality of optimal transport but also improve the sampling scheme and training procedure of sliced optimal transport to select the most important projection directions. For the scalability, we propose new minibatch frameworks (OT.6, OT.7) to address the misspecified matching issues of current minibatch optimal transport methods in the literature. Furthermore, we develop several optimization algorithms with near-optimal computational complexity (OT.8, OT.9, OT.10) for approximating optimal transport and its variants.
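For readers unfamiliar with the sliced construction, the following is a minimal NumPy sketch of the vanilla sliced Wasserstein distance between two equal-size point clouds (the function and parameter names are illustrative, not taken from our papers); the variants above replace the uniform direction sampling below with learned or reweighted schemes:

    import numpy as np

    def sliced_wasserstein(X, Y, n_projections=100, p=2, seed=None):
        # X, Y: (n, d) arrays of samples from the two distributions.
        rng = np.random.default_rng(seed)
        d = X.shape[1]
        # Sample projection directions uniformly on the unit sphere.
        theta = rng.normal(size=(n_projections, d))
        theta /= np.linalg.norm(theta, axis=1, keepdims=True)
        # Project both point clouds; each column is a 1D OT problem.
        X_proj, Y_proj = X @ theta.T, Y @ theta.T
        # In 1D, optimal transport between equal-size empirical measures
        # simply matches sorted samples, so no linear program is needed.
        X_sorted, Y_sorted = np.sort(X_proj, axis=0), np.sort(Y_proj, axis=0)
        return np.mean(np.abs(X_sorted - Y_sorted) ** p) ** (1.0 / p)

Because each projection reduces the problem to sorting, the cost is O(K n log n) for K projections, which is the source of the scalability gains over solving the full d-dimensional optimal transport problem.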

For the third aspect (3), we study the interplay and trade-offs between instability, statistical accuracy, and computational efficiency of optimization and sampling algorithms (O.1) for parameter estimation in statistical machine learning models. Based on these insights, we provide a rigorous characterization of the statistical behavior of the Expectation-Maximization (EM) algorithm (O.2, O.3) for solving mixture models and of factorized gradient descent for a class of low-rank matrix factorization problems (O.4). Finally, in recent work, we propose an exponential schedule for gradient descent (O.5) and demonstrate that this algorithm attains the optimal linear computational complexity for parameter estimation in statistical machine learning models.
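As a concrete instance of the kind of algorithm analyzed here, below is a minimal NumPy sketch of EM for the symmetric two-component mixture 0.5·N(θ, 1) + 0.5·N(−θ, 1), a standard test case in this literature (the setup and names are illustrative; the papers above cover far more general models):

    import numpy as np

    def em_symmetric_mixture(x, n_iters=200):
        # Minimal EM for 0.5*N(theta, 1) + 0.5*N(-theta, 1) given 1D samples x.
        theta = np.abs(np.mean(x)) + 0.5  # crude initialization
        for _ in range(n_iters):
            # E-step: posterior weight that each sample came from N(theta, 1);
            # the Gaussian likelihood ratio simplifies to a logistic function.
            w = 1.0 / (1.0 + np.exp(-2.0 * theta * x))
            # M-step: responsibility-weighted update of the location parameter.
            theta = np.mean((2.0 * w - 1.0) * x)
        return theta

When the true θ is small, so the two components are weakly separated, this update contracts much more slowly than in the well-separated regime; quantifying that slowdown against statistical accuracy is exactly the kind of trade-off studied in this line of work.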

For the fourth aspect (4), one example of our work is the study of the statistical and geometric behavior of latent variables in mixture and hierarchical models via tools from optimal transport, quantization theory, and algebraic geometry. For example, we demonstrate that the convergence rates of maximum likelihood estimation for finite mixture models and finite mixtures of experts are determined by the solvability of certain systems of polynomial equations, one of the key problems in algebraic geometry (M.1, M.2, M.3, M.4). These convergence-rate theories also lead to a novel model selection procedure (M.5) for finite and infinite mixture models.
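To see why such non-standard rates matter in practice, here is a small, self-contained simulation (illustrative only, using scikit-learn rather than our own code): it fits an overfitted two-component Gaussian mixture to data drawn from a single standard Gaussian, a setting where known theory predicts the fitted component means approach the true atom at 0 at a rate much slower than the usual n^(-1/2):

    import numpy as np
    from sklearn.mixture import GaussianMixture

    rng = np.random.default_rng(0)
    for n in [1_000, 10_000, 100_000]:
        # True model: one standard Gaussian (a single atom at 0);
        # fitted model: an overfitted two-component Gaussian mixture.
        x = rng.normal(size=(n, 1))
        gm = GaussianMixture(n_components=2, n_init=5, random_state=0).fit(x)
        err = float(np.max(np.abs(gm.means_)))
        print(f"n={n:7d}  max |fitted mean| = {err:.3f}  "
              f"(reference slow rate n^(-1/4) = {n ** -0.25:.3f})")

The fitted means shrink toward 0 far more slowly than parametric intuition suggests, which is the phenomenon the polynomial-equation analysis above makes precise.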

Apart from these topics, we also study Bayesian inference and asymptotics from new perspectives. For example, we utilize diffusion processes to establish posterior convergence rates for parameters in statistical models (E.1), and we employ the Fourier Integral Theorem to establish the posterior consistency of Bayesian nonparametric models (E.2).


The official GitHub repository for code from research papers of our Data Science and Machine Learning (DSML) Lab is: https://github.com/UT-Austin-Data-Science-Group.

Editorial Boards of Journals

Area Chairs of Conferences in Machine Learning

Recent News

Selected Publications on Theory (Hierarchical and Mixture Models, (Non)-Convex Optimization, Optimal Transport, (Approximate) Bayesian Inference, etc.)

Selected Publications on Method and Application (Generative Models, Optimal Transport, Transformer, Convolutional Neural Networks, etc.)

  • Efficient Transformer with compressive sensing. Under review.
    Tan Minh Nguyen, Tho Tran Huu, Tam Minh Nguyen, Minh Pham, Stanley Osher, Nhat Ho.