Papers

Asterisk (*) after last author denotes alphabetical ordering.

Preprints

Publications


  1. A. Mousavi, D. Wu, and M.A. Erdogdu, Learning Multi-Index Models with Neural Networks via Mean-Field Langevin Dynamics, ICLR 2025, Proceedings of International Conference on Learning Representations

  2. A. Mousavi, A. Javanmard, and M.A. Erdogdu, Robust Feature Learning for Multi-Index Models in High Dimensions, ICLR 2025, Proceedings of International Conference on Learning Representations

  3. A. El Hanchi, C. Maddison, M.A. Erdogdu, On the Efficiency of ERM in Feature Learning, NeurIPS 2024, Proceedings of Advances in Neural Information Processing Systems

  4. Y. He, A. Mousavi, K. Balasubramanian, M.A. Erdogdu, A Separation in Heavy-Tailed Sampling: Gaussian vs. Stable Oracles for Proximal Samplers, NeurIPS 2024, Proceedings of Advances in Neural Information Processing Systems

  5. S. Chewi, M.A. Erdogdu, M. Li, R. Shen, M. Zhang*, Analysis of Langevin Monte Carlo from Poincaré to Log-Sobolev, Foundations of Computational Mathematics, 2024
    > Conference version: COLT 2022, Annual Conference on Learning Theory

  6. N.M. Vural and M.A. Erdogdu, Pruning is Optimal for Learning Sparse Features in High-Dimensions, COLT 2024, Proceedings of Annual Conference on Learning Theory

  7. A. El Hanchi, C. Maddison, M.A. Erdogdu, Minimax Linear Regression under the Quantile Risk, COLT 2024, Proceedings of Annual Conference on Learning Theory

  8. Y. Kook, M. Zhang, S. Chewi, M.A. Erdogdu, M. Li, Sampling from the Mean-Field Stationary Distribution, COLT 2024, Proceedings of Annual Conference on Learning Theory

  9. Y. He, T. Farghly, K. Balasubramanian, and M.A. Erdogdu, Mean-Square Analysis of Discretized Itô Diffusions for Heavy-Tailed Sampling, JMLR 2024, Journal of Machine Learning Research

  10. A. Mousavi, D. Wu, T. Suzuki, M.A. Erdogdu, Gradient-Based Feature Learning under Structured Data, NeurIPS 2023, Proceedings of Advances in Neural Information Processing Systems

  11. A. El Hanchi, M.A. Erdogdu, Optimal Excess Risk Bounds for Empirical Risk Minimization on p-Norm Linear Regression, NeurIPS 2023, Proceedings of Advances in Neural Information Processing Systems

  12. J. Ba, M.A. Erdogdu, T. Suzuki, Z. Wang, D. Wu*, Learning in the Presence of Low-dimensional Structure: A Spiked Random Matrix Perspective, NeurIPS 2023, Proceedings of Advances in Neural Information Processing Systems

  13. T. Kastner, M.A. Erdogdu, A. Farahmand, Distributional Model Equivalence for Risk-Sensitive Reinforcement Learning, NeurIPS 2023, Proceedings of Advances in Neural Information Processing Systems

  14. Y. He, K. Balasubramanian, M.A. Erdogdu, An Analysis of Transformed Unadjusted Langevin Algorithm for Heavy-Tailed Sampling, IEEE Transactions on Information Theory, 2023

  15. A. Mousavi-Hosseini, T. Farghly, Y. He, K. Balasubramanian, M.A. Erdogdu, Towards a Complete Analysis of Langevin Monte Carlo: Beyond Poincaré Inequality, COLT 2023, Proceedings of Annual Conference on Learning Theory

  16. M.S. Zhang, S. Chewi, M. Li, K. Balasubramanian, M.A. Erdogdu, Improved Discretization Analysis for Underdamped Langevin Monte Carlo, COLT 2023, Proceedings of Annual Conference on Learning Theory

  17. A. Mousavi-Hosseini, S. Park, M. Girotti, I. Mitliagkas, M.A. Erdogdu, Neural Networks Efficiently Learn Low-Dimensional Representations with SGD, ICLR 2023 (Spotlight), Proceedings of International Conference on Learning Representations

  18. M.B. Li and M.A. Erdogdu, Riemannian Langevin Algorithm for Solving Semidefinite Programs, Bernoulli, 2023

  19. S. Park, U. Simsekli, M.A. Erdogdu, Generalization Bounds for Stochastic Gradient Descent via Localized ε-Covers, NeurIPS 2022, Proceedings of Advances in Neural Information Processing Systems

  20. J. Ba, M.A. Erdogdu, T. Suzuki, Z. Wang, D. Wu, G. Yang*, High-dimensional Asymptotics of Feature Learning: How One Gradient Step Improves the Representation, NeurIPS 2022, Proceedings of Advances in Neural Information Processing Systems

  21. N.M. Vural, L. Yu, K. Balasubramanian, S. Volgushev and M.A. Erdogdu, Mirror Descent Strikes Again: Optimal Stochastic Convex Optimization under Infinite Noise Variance, COLT 2022, Proceedings of Annual Conference on Learning Theory

  22. K. Balasubramanian, S. Chewi, M.A. Erdogdu, A. Salim and M.S. Zhang*, Towards a Theory of Non-Log-Concave Sampling: First-Order Stationarity Guarantees for Langevin Monte Carlo, COLT 2022, Proceedings of Annual Conference on Learning Theory

  23. J. Ba, M.A. Erdogdu, M. Ghassemi, T. Suzuki, S. Sun, D. Wu and T. Zhang*, Understanding the Variance Collapse of SVGD in High Dimensions, ICLR 2022, Proceedings of International Conference on Learning Representations

  24. M.A. Erdogdu, R. Hosseinzadeh and M.S. Zhang*, Convergence of Langevin Monte Carlo in Chi-Squared and Rényi Divergence, AISTATS 2022, Proceedings of International Conference on Artificial Intelligence and Statistics

  25. M.S. Zhang, M.A. Erdogdu, and A. Garg, Convergence and Optimality of Policy Gradient Methods in Weakly Smooth Settings, AAAI 2022, Proceedings of the AAAI Conference on Artificial Intelligence

  26. M.A. Erdogdu, A. Ozdaglar, P. Parrilo, and N.D. Vanli*, Convergence Rate of Block-Coordinate Maximization Burer-Monteiro Method for Solving Large SDPs, Mathematical Programming Series A, 2021

  27. A. Roy, K. Balasubramanian, and M.A. Erdogdu, On Empirical Risk Minimization with Dependent and Heavy-Tailed Data, NeurIPS 2021, Proceedings of Advances in Neural Information Processing Systems

  28. A. Camuto, G. Deligiannidis, M.A. Erdogdu, M. Gurbuzbalaban, U. Simsekli, L. Zhu*, Fractal Structure and Generalization Properties of Stochastic Optimization Algorithms, NeurIPS 2021 (Spotlight), Proceedings of Advances in Neural Information Processing Systems

  29. H. Wang, M. Gurbuzbalaban, L. Zhu, U. Simsekli and M.A. Erdogdu, Convergence Rates of Stochastic Gradient Descent under Infinite Noise Variance, NeurIPS 2021, Proceedings of Advances in Neural Information Processing Systems

  30. L. Yu, K. Balasubramanian, S. Volgushev and M.A. Erdogdu, An Analysis of Constant Step Size SGD in the Non-convex Regime: Asymptotic Normality and Bias, NeurIPS 2021, Proceedings of Advances in Neural Information Processing Systems

  31. I. Shumailov, Z. Shumaylov, D. Kazhdan, Y. Zhao, N. Papernot, M.A. Erdogdu, and R. Anderson, Manipulating SGD with Data Ordering Attacks, NeurIPS 2021, Proceedings of Advances in Neural Information Processing Systems

  32. M. Barsbey, M. Sefidgaran, M.A. Erdogdu, G. Richard, U. Simsekli, Heavy Tails in SGD and Compressibility of Overparametrized Neural Networks, NeurIPS 2021, Proceedings of Advances in Neural Information Processing Systems

  33. M.A. Erdogdu and R. Hosseinzadeh*, On the Convergence of Langevin Monte Carlo: The Interplay between Tail Growth and Smoothness, COLT 2021, Proceedings of Annual Conference on Learning Theory

  34. U. Simsekli, O. Sener, G. Deligiannidis, M.A. Erdogdu, Hausdorff Dimension, Stochastic Differential Equations, and Generalization in Neural Networks, NeurIPS 2020 (Spotlight), Proceedings of Advances in Neural Information Processing Systems
    > Journal version (Invited): JSTAT 2022, Journal of Statistical Mechanics: Theory and Experiment

  35. Y. He, K. Balasubramanian and M.A. Erdogdu, On the Ergodicity, Bias and Asymptotic Normality of Randomized Midpoint Sampling Method, NeurIPS 2020, Proceedings of Advances in Neural Information Processing Systems

  36. J. Ba, M.A. Erdogdu, T. Suzuki, D. Wu and T. Zhang*, Generalization of Two-layer Neural Networks: An Asymptotic Viewpoint, ICLR 2020 (Spotlight), Proceedings of International Conference on Learning Representations

  37. M.A. Erdogdu, M. Bayati and L.H. Dicker, Scalable Approximations for Generalized Linear Problems, JMLR 2019, Journal of Machine Learning Research

  38. A. Anastasiou, K. Balasubramanian and M.A. Erdogdu*, Normal Approximation for Stochastic Gradient Descent via Non-Asymptotic Rates of Martingale CLT, COLT 2019, Proceedings of Annual Conference on Learning Theory

  39. J. Ba, M.A. Erdogdu, M. Ghassemi, T. Suzuki, S. Sun, D. Wu and T. Zhang*, Towards Characterizing the High-dimensional Bias of Kernel-based Particle Inference Algorithms, AABI 2019, Symposium on Advances in Approximate Bayesian Inference

  40. X. Li, D. Wu, L. Mackey and M.A. Erdogdu, Stochastic Runge-Kutta Accelerates Langevin Monte Carlo and Beyond, NeurIPS 2019 (Spotlight), Proceedings of Advances in Neural Information Processing Systems

  41. M.A. Erdogdu, L. Mackey and O. Shamir*, Global Non-convex Optimization with Discretized Diffusions, NeurIPS 2018, Proceedings of Advances in Neural Information Processing Systems

  42. L.H. Dicker and M.A. Erdogdu*, Flexible results for quadratic forms with applications to variance components estimation, Annals of Statistics 2018

  43. M.A. Erdogdu, Y. Deshpande and A. Montanari, Inference in Graphical Models via Semidefinite Programming Hierarchies, NeurIPS 2017, Proceedings of Advances in Neural Information Processing Systems

  44. H. Inan, M.A. Erdogdu and M. Schnitzer, Robust Estimation of Neural Signals in Calcium Imaging, NeurIPS 2017, Proceedings of Advances in Neural Information Processing Systems

  45. M.A. Erdogdu, Newton-Stein Method: An Optimization Method for GLMs via Stein's Lemma, JMLR 2016, Journal of Machine Learning Research

  46. M.A. Erdogdu, M. Bayati and L.H. Dicker, Scaled Least Squares Estimator for GLMs in Large-Scale Problems, NeurIPS 2016, Proceedings of Advances in Neural Information Processing Systems

  47. L.H. Dicker and M.A. Erdogdu*, Maximum Likelihood for Variance Estimation in High-Dimensional Linear Models, AISTATS 2016, Proceedings of International Conference on Artificial Intelligence and Statistics

  48. M.A. Erdogdu, Newton-Stein Method: A Second Order Method for GLMs via Stein's Lemma, NeurIPS 2015 (Spotlight), Proceedings of Advances in Neural Information Processing Systems

  49. M.A. Erdogdu and A. Montanari*, Convergence rates of sub-sampled Newton methods, NeurIPS 2015, Proceedings of Advances in Neural Information Processing Systems

  50. R. Kolte, M.A. Erdogdu and A. Ozgur, Accelerating SVRG via second-order information, NeurIPS 2015, Advances in Neural Information Processing Systems, OptML Workshop

  51. Q. Zhao, M.A. Erdogdu, H. He, A. Rajaraman and J. Leskovec, SEISMIC: A Self-Exciting Point Process Model for Predicting Tweet Popularity, KDD 2015, Proceedings of Conference on Knowledge Discovery and Data Mining

  52. M.A. Erdogdu and N. Fawaz, Privacy-utility trade-off under continual observation, ISIT 2015, Proceedings of IEEE International Symposium on Information Theory

  53. M.A. Erdogdu, N. Fawaz and A. Montanari, Privacy-utility trade-off for time-series with application to smart-meter data, AAAI 2015, Association for the Advancement of Artificial Intelligence, Workshop on Computational Sustainability

  54. M. Bayati, M.A. Erdogdu and A. Montanari*, Estimating LASSO Risk and Noise Level, NeurIPS 2013, Proceedings of Advances in Neural Information Processing Systems

Thesis