Nhat Ho
Brief Biography
I am currently an Assistant Professor of Statistics and Data Sciences at the University of Texas at Austin. I am also a core member of the Machine Learning Laboratory and senior personnel of the Institute for Foundations of Machine Learning (IFML). Before coming to Austin, I was a postdoctoral fellow in the Electrical Engineering and Computer Science (EECS) Department at the University of California, Berkeley, where I was very fortunate to be mentored by Professor Michael I. Jordan and Professor Martin J. Wainwright. Going further back in time, I finished my PhD degree in 2017 at the Department of Statistics, University of Michigan, Ann Arbor, where I was very fortunate to be advised by Professor Long Nguyen and Professor Ya'acov Ritov.
Research Interests
A central theme of my research focuses on four important aspects of complex and large-scale models and data:
--- For the first aspect (1), our work includes studying the statistical and geometric behaviors of latent variables in sparse and high-dimensional mixture and hierarchical models via tools from optimal transport, quantization theory, and algebraic geometry. For example, we demonstrate that the convergence rates of maximum likelihood estimation for finite mixture models and for input-independent gating mixture of experts are determined by the solvability of a system of polynomial equations, one of the key problems in algebraic geometry (M.1, M.2, M.3, M.4). These theories on the convergence rates of the MLE also lead to a novel model selection procedure (M.5) for finite and infinite mixture models. Recently, we provided a comprehensive theory for the long-standing open problem of parameter and expert estimation in softmax gating Gaussian mixture of experts (M.6), a class of conditional mixture models widely used in machine learning and deep learning to scale up large neural network architectures (a toy sketch of a softmax-gated mixture of experts appears below). Our theory relies on defining novel Voronoi-based losses over the parameters, which precisely capture the intrinsic interaction (via partial differential equations with respect to model parameters) between the softmax gating function and the expert functions. In subsequent work (M.7), we also established general theories for softmax gating mixture of experts under the least-squares loss. Furthermore, we carried these insights over to several other important variants of softmax gating mixture of experts that are currently used to scale up Transformers and large language models, including top-K sparse mixture of experts (M.8) and dense-to-sparse (equivalently, temperature softmax) mixture of experts (M.9), as well as their use in other machine learning tasks (M.10, M.11). On the methodology and application side, we recently developed a novel, effective training scheme for sparse mixture of experts via competition to scale up large AI models (M.12), and we utilized mixture of experts with a Laplace gating function in Transformers, a state-of-the-art deep learning architecture for language and computer vision applications, to build a large multimodal model for the multimodal data in electronic health records (M.13).
--- For the second aspect (2), we utilize insights from statistical machine learning modeling and theory, together with Hamilton-Jacobi partial differential equations (PDEs), to understand deep learning and complex machine learning models. Examples of our work include using mixture and hierarchical models (T.1, T.2) to reduce redundancy in Transformers and interpreting Transformers through primal-dual frameworks from support vector regression (T.3). Furthermore, we utilize the Fourier Integral Theorem and its generalized version (T.4), a beautiful result in mathematics, to improve the interpretability and performance of Transformers. The Fourier Integral Theorem is also used in our other works to build estimators for further machine learning and statistics applications (T.5). Finally, we develop a Bayesian deconvolution model (T.6) to understand convolutional neural networks and provide a complete theory for the neural collapse phenomenon in deep linear neural networks (T.7). Recently, we also established guarantees for several interesting phenomena in training deep learning models, including Neural Collapse (T.8, T.9) and Posterior Collapse (T.10).
--- For the third aspect (3), we focus on improving the scalability and efficiency of optimal transport and addressing its curse of dimensionality in deep learning applications, such as deep generative models and domain adaptation. On the efficiency and curse-of-dimensionality front, we propose several new variants of sliced optimal transport (OT.1, OT.2, OT.3, OT.4, OT.5) that not only circumvent the curse of dimensionality of optimal transport but also improve the sampling scheme and training procedure of sliced optimal transport so as to select the most important projection directions (see the sliced Wasserstein sketch below). On the scalability front, we propose new minibatch frameworks (OT.6, OT.7) to address the misspecified-matching issues of existing minibatch optimal transport approaches. Furthermore, we develop several optimization algorithms with near-optimal computational complexities (OT.8, OT.9, OT.10) for approximating optimal transport and its variants. On the application side, we proposed using (sliced) optimal transport and its variants to build large pretrained models for medical images (OT.11), for audio-text retrieval (OT.12), for cortical surface reconstruction (OT.13), for molecular property prediction (OT.14), and for shape correspondence learning (OT.15).
--- For the fourth aspect (4), we study the interplay and trade-offs among the instability, statistical accuracy, and computational efficiency of optimization and sampling algorithms (O.1) for parameter estimation in statistical machine learning models. Based on these insights, we provide rigorous statistical characterizations of the Expectation-Maximization (EM) algorithm (O.2, O.3) for solving mixture models (a minimal EM sketch appears below) and of factorized gradient descent for a class of low-rank matrix factorization problems (O.4). Finally, in recent work, we propose an exponential schedule for gradient descent (O.5) and demonstrate that this algorithm attains optimal linear computational complexity for parameter estimation in statistical machine learning models. Recently, we also proposed a novel robust criterion for distributionally robust optimization by combining insights from Bayesian nonparametric theory (e.g., the Dirichlet process) with recent decision-theoretic models of smooth ambiguity-averse preferences (O.6).
--- Apart from these topics, we also study Bayesian inference and asymptotics from new perspectives. For example, we utilize diffusion processes to establish posterior convergence rates of parameters in statistical models (E.1) and employ the Fourier Integral Theorem to establish posterior consistency of Bayesian nonparametric models (E.2).
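To make the softmax gating mixture of experts concrete, here is a minimal NumPy sketch of a softmax-gated layer with optional top-K sparse routing. It is an illustrative toy, assuming linear experts and random parameters; the helper names (moe_forward, W_gate) are mine and do not come from the papers cited above.

```python
import numpy as np

def softmax(z):
    """Numerically stable softmax along the last axis."""
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def moe_forward(x, W_gate, expert_weights, top_k=None):
    """Toy softmax-gated mixture-of-experts forward pass.

    x              : (n, d) inputs
    W_gate         : (d, m) gating parameters; gates = softmax(x @ W_gate)
    expert_weights : list of m (d, p) matrices; expert_j(x) = x @ expert_weights[j]
    top_k          : if set, keep only the top-k gate values per input
                     (renormalized), mimicking sparse routing.
    """
    gates = softmax(x @ W_gate)                                     # (n, m) mixture weights
    if top_k is not None:
        # Zero out all but the k largest gate values per row, then renormalize.
        drop = np.argsort(gates, axis=-1)[:, :-top_k]
        np.put_along_axis(gates, drop, 0.0, axis=-1)
        gates = gates / gates.sum(axis=-1, keepdims=True)
    expert_out = np.stack([x @ W for W in expert_weights], axis=1)  # (n, m, p)
    return np.einsum("nm,nmp->np", gates, expert_out)

# Usage: 8 inputs in R^4, 3 linear experts mapping to R^2, top-2 routing.
rng = np.random.default_rng(0)
x = rng.normal(size=(8, 4))
W_gate = rng.normal(size=(4, 3))
experts = [rng.normal(size=(4, 2)) for _ in range(3)]
print(moe_forward(x, W_gate, experts, top_k=2).shape)  # (8, 2)
```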
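In the same spirit, the following is a small Monte Carlo sketch of the sliced Wasserstein distance between two empirical measures, the basic quantity behind the sliced optimal transport variants mentioned above. Uniform random projection directions and equal sample sizes are simplifying assumptions, and the function name sliced_wasserstein is mine; the cited papers study refined sampling schemes for the directions.

```python
import numpy as np

def sliced_wasserstein(X, Y, n_projections=100, p=2, rng=None):
    """Monte Carlo estimate of the sliced p-Wasserstein distance between two
    empirical measures with the same number of support points.

    X, Y : (n, d) samples from the two measures.
    Each random direction on the unit sphere projects both point clouds to 1-D,
    where the p-Wasserstein distance reduces to comparing sorted projections.
    """
    if rng is None:
        rng = np.random.default_rng()
    d = X.shape[1]
    thetas = rng.normal(size=(n_projections, d))
    thetas /= np.linalg.norm(thetas, axis=1, keepdims=True)  # uniform directions
    proj_X = X @ thetas.T                                    # (n, n_projections)
    proj_Y = Y @ thetas.T
    # 1-D optimal transport: match order statistics of the projections.
    diff = np.sort(proj_X, axis=0) - np.sort(proj_Y, axis=0)
    return np.mean(np.abs(diff) ** p) ** (1.0 / p)

# Usage: two Gaussian point clouds in R^10 with shifted means.
rng = np.random.default_rng(1)
X = rng.normal(size=(500, 10))
Y = rng.normal(loc=1.0, size=(500, 10))
print(sliced_wasserstein(X, Y, n_projections=200, rng=rng))
```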
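Finally, a minimal sketch of the EM iteration for a symmetric two-component Gaussian location mixture, the kind of setting in which the statistical behavior of EM is analyzed. The symmetric model 0.5*N(theta, 1) + 0.5*N(-theta, 1), the fixed initialization, and the iteration count are illustrative assumptions, not the exact setups of the cited works.

```python
import numpy as np

def em_symmetric_gaussian_mixture(x, theta0=0.5, n_iter=200):
    """EM for the location parameter of 0.5*N(theta, 1) + 0.5*N(-theta, 1)."""
    theta = theta0
    for _ in range(n_iter):
        # E-step: posterior probability that each point comes from N(theta, 1).
        log_p_plus = -0.5 * (x - theta) ** 2
        log_p_minus = -0.5 * (x + theta) ** 2
        w = 1.0 / (1.0 + np.exp(log_p_minus - log_p_plus))
        # M-step: the weighted average gives the updated location estimate.
        theta = np.mean((2.0 * w - 1.0) * x)
    return theta

# Usage: data generated with true theta = 1.5.
rng = np.random.default_rng(2)
signs = rng.choice([-1.0, 1.0], size=2000)
x = signs * 1.5 + rng.normal(size=2000)
print(em_symmetric_gaussian_mixture(x))  # converges near the true location 1.5
```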
Codes
The official GitHub link for the code of research papers from our Data Science and Machine Learning (DSML) Lab is: https://github.com/UT-Austin-Data-Science-Group.
Editorial Boards of Journals
Area Chairs of Conferences in Machine Learning and Artificial Intelligence
Media Coverage
Data Science, Machine Learning, Statistics, and Artificial Intelligence have become very important fields in Vietnam these days. However, as these fields are still very young in Vietnam, the young Vietnamese generation often faces challenges in equipping themselves with enough information, knowledge, and skills to pursue career paths in these fields. For this reason, several leading newspapers and shows in Vietnam have covered my path and story of becoming a professor at a leading US university, as well as my opinions about these fields, to inspire and provide necessary information to the young generation in Vietnam who would like to pursue careers in Data Science, Machine Learning, Statistics, and Artificial Intelligence, including:
Recent News
Selected Publications on Theory (Hierarchical and Mixture Models, Bayesian Nonparametrics, Optimal Transport, Deep Learning, (Approximate) Bayesian Inference, (Non)-Convex Optimization, etc.) (* = equal contribution)
Selected Publications on Method and Application (Optimal Transport, Transformer, Deep Generative Models, 3D Deep Learning, Convolutional Neural Networks, etc.)