Papers
Asterisk (*) after last author denotes alphabetical ordering.
Preprints
Publications
A. Mousavi, D. Wu, and M.A. Erdogdu,
Learning Multi-Index Models with Neural Networks via Mean-Field Langevin Dynamics,
ICLR 2025, Proceedings of International Conference on Learning Representations
A. Mousavi, A. Javanmard, and M.A. Erdogdu,
Robust Feature Learning for Multi-Index Models in High Dimensions,
ICLR 2025, Proceedings of International Conference on Learning Representations
A. El Hanchi, C. Maddison, M.A. Erdogdu,
On the Efficiency of ERM in Feature Learning,
NeurIPS 2024, Proceedings of Advances in Neural Information Processing Systems
Y. He, A. Mousavi, K. Balasubramanian, M.A. Erdogdu,
A Separation in Heavy-Tailed Sampling: Gaussian vs. Stable Oracles for Proximal Samplers,
NeurIPS 2024, Proceedings of Advances in Neural Information Processing Systems
S. Chewi, M.A. Erdogdu, M. Li, R. Shen, M. Zhang*,
Analysis of Langevin Monte Carlo from Poincaré to Log-Sobolev,
Foundations of Computational Mathematics, 2024
> Conference version: COLT 2022, Annual Conference on Learning Theory
N.M. Vural and M.A. Erdogdu,
Pruning is Optimal for Learning Sparse Features in High-Dimensions,
COLT 2024, Proceedings of Annual Conference on Learning Theory
A. El Hanchi, C. Maddison, M.A. Erdogdu,
Minimax Linear Regression under the Quantile Risk,
COLT 2024, Proceedings of Annual Conference on Learning Theory
Y. Kook, M. Zhang, S. Chewi, M.A. Erdogdu, M. Li,
Sampling from the Mean-Field Stationary Distribution,
COLT 2024, Proceedings of Annual Conference on Learning Theory
Y. He, T. Farghly, K. Balasubramanian, and M.A. Erdogdu,
Mean-Square Analysis of Discretized Itô Diffusions for Heavy-tailed Sampling,
JMLR 2024, Journal of Machine Learning Research
A. Mousavi, D. Wu, T. Suzuki, M.A. Erdogdu,
Gradient-Based Feature Learning under Structured Data,
NeurIPS 2023, Proceedings of Advances in Neural Information Processing Systems
A. El Hanchi, M.A. Erdogdu,
Optimal Excess Risk Bounds for Empirical Risk Minimization on p-Norm Linear Regression,
NeurIPS 2023, Proceedings of Advances in Neural Information Processing Systems
J. Ba, M.A. Erdogdu, T. Suzuki, Z. Wang, D. Wu*,
Learning in the Presence of Low-dimensional Structure: A Spiked Random Matrix Perspective,
NeurIPS 2023, Proceedings of Advances in Neural Information Processing Systems
T. Kastner, M.A. Erdogdu, A. Farahmand,
Distributional Model Equivalence for Risk-Sensitive Reinforcement Learning,
NeurIPS 2023, Proceedings of Advances in Neural Information Processing Systems
Y. He, K. Balasubramanian, M.A. Erdogdu,
An Analysis of Transformed Unadjusted Langevin Algorithm for Heavy-tailed Sampling,
IEEE Transactions on Information Theory, 2023
A. Mousavi-Hosseini, T. Farghly, Y. He, K. Balasubramanian, M.A. Erdogdu,
Towards a Complete Analysis of Langevin Monte Carlo: Beyond Poincaré Inequality,
COLT 2023, Proceedings of Annual Conference on Learning Theory
M.S. Zhang, S. Chewi, M. Li, K. Balasubramanian, M.A. Erdogdu,
Improved Discretization Analysis for Underdamped Langevin Monte Carlo,
COLT 2023, Proceedings of Annual Conference on Learning Theory
A. Mousavi-Hosseini, S. Park, M. Girotti, I. Mitliagkas, M.A. Erdogdu,
Neural Networks Efficiently Learn Low-Dimensional Representations with SGD,
ICLR 2023 (Spotlight), Proceedings of International Conference on Learning Representations
M.B. Li and M.A. Erdogdu,
Riemannian Langevin Algorithm for Solving Semidefinite Programs,
Bernoulli Journal 2023
S. Park, U. Simsekli, M.A. Erdogdu,
Generalization Bounds for Stochastic Gradient Descent via Localized ε-Covers,
NeurIPS 2022, Proceedings of Advances in Neural Information Processing Systems
J. Ba, M.A. Erdogdu, T. Suzuki, Z. Wang, D. Wu, G. Yang*,
High-dimensional Asymptotics of Feature Learning: How One Gradient Step Improves the Representation,
NeurIPS 2022, Proceedings of Advances in Neural Information Processing Systems
N.M. Vural, L. Yu, K. Balasubramanian, S. Volgushev and M.A. Erdogdu,
Mirror Descent Strikes Again: Optimal Stochastic Convex Optimization under Infinite Noise Variance,
COLT 2022, Proceedings of Annual Conference on Learning Theory
K. Balasubramanian, S. Chewi, M.A. Erdogdu, A. Salim and M.S. Zhang*,
Towards a Theory of Non-Log-Concave Sampling: First-Order Stationarity Guarantees for Langevin Monte Carlo,
COLT 2022, Proceedings of Annual Conference on Learning Theory
J. Ba, M.A. Erdogdu, M. Ghassemi, T. Suzuki, S. Sun, D. Wu and T. Zhang*,
Understanding the Variance Collapse of SVGD in High Dimensions,
ICLR 2022, Proceedings of International Conference on Learning Representations
M.A. Erdogdu, R. Hosseinzadeh and M.S. Zhang*,
Convergence of Langevin Monte Carlo in Chi-Squared and Rényi Divergence,
AISTATS 2022, Proceedings of International Conference on Artificial Intelligence and Statistics
M.S. Zhang, M.A. Erdogdu, and A. Garg,
Convergence and Optimality of Policy Gradient Methods in Weakly Smooth Settings,
AAAI 2022, Proceedings of the Association for the Advancement of Artificial Intelligence
M.A. Erdogdu, A. Ozdaglar, P. Parrilo, and N.D. Vanli*,
Convergence Rate of Block-Coordinate Maximization Burer-Monteiro Method for Solving Large SDPs,
Mathematical Programming Series A, 2021
A. Roy, K. Balasubramanian, and M.A. Erdogdu,
On Empirical Risk Minimization with Dependent and Heavy-Tailed Data,
NeurIPS 2021, Proceedings of Advances in Neural Information Processing Systems
A. Camuto, G. Deligiannidis, M.A. Erdogdu, M. Gurbuzbalaban, U. Simsekli, L. Zhu*,
Fractal Structure and Generalization Properties of Stochastic Optimization Algorithms,
NeurIPS 2021 (Spotlight), Proceedings of Advances in Neural Information Processing Systems
H. Wang, M. Gurbuzbalaban, L. Zhu, U. Simsekli and M.A. Erdogdu,
Convergence Rates of Stochastic Gradient Descent under Infinite Noise Variance,
NeurIPS 2021, Proceedings of Advances in Neural Information Processing Systems
L. Yu, K. Balasubramanian, S. Volgushev and M.A. Erdogdu,
An Analysis of Constant Step Size SGD in the Non-convex Regime: Asymptotic Normality and Bias,
NeurIPS 2021, Proceedings of Advances in Neural Information Processing Systems
I. Shumailov, Z. Shumaylov, D. Kazhdan, Y. Zhao, N. Papernot, M.A. Erdogdu, and R. Anderson,
Manipulating SGD with Data Ordering Attacks,
NeurIPS 2021, Proceedings of Advances in Neural Information Processing Systems
M. Barsbey, M. Sefidgaran, M.A. Erdogdu, G. Richard, U. Simsekli,
Heavy Tails in SGD and Compressibility of Overparametrized Neural Networks,
NeurIPS 2021, Proceedings of Advances in Neural Information Processing Systems
M.A. Erdogdu and R. Hosseinzadeh*,
On the Convergence of Langevin Monte Carlo: The Interplay between Tail Growth and Smoothness,
COLT 2021, Proceedings of Annual Conference on Learning Theory
U. Simsekli, O. Sener, G. Deligiannidis, M.A. Erdogdu,
Hausdorff Dimension, Stochastic Differential Equations, and Generalization in Neural Networks,
NeurIPS 2020 (Spotlight), Proceedings of Advances in Neural Information Processing Systems
> Journal version (Invited): JSTAT 2022, Journal of Statistical Mechanics: Theory and Experiment
Y. He, K. Balasubramanian and M.A. Erdogdu,
On the Ergodicity, Bias and Asymptotic Normality of Randomized Midpoint Sampling Method,
NeurIPS 2020, Proceedings of Advances in Neural Information Processing Systems
J. Ba, M.A. Erdogdu, T. Suzuki, D. Wu and T. Zhang*,
Generalization of Two-layer Neural Networks: An Asymptotic Viewpoint,
ICLR 2020 (Spotlight), Proceedings of International Conference on Learning Representations
M.A. Erdogdu, M. Bayati and L.H. Dicker,
Scalable Approximations for Generalized Linear Problems,
JMLR 2019, Journal of Machine Learning Research
A. Anastasiou, K. Balasubramanian and M.A. Erdogdu*,
Normal Approximation for Stochastic Gradient Descent via Non-Asymptotic Rates of Martingale CLT,
COLT 2019, Proceedings of Annual Conference on Learning Theory
J. Ba, M.A. Erdogdu, M. Ghassemi, T. Suzuki, S. Sun, D. Wu and T. Zhang*,
Towards Characterizing the High-dimensional Bias of Kernel-based Particle Inference Algorithms,
AABI 2019, Symposium on Advances in Approximate Bayesian Inference
X. Li, D. Wu, L. Mackey and M.A. Erdogdu,
Stochastic Runge-Kutta Accelerates Langevin Monte Carlo and Beyond,
NeurIPS 2019 (Spotlight), Proceedings of Advances in Neural Information Processing Systems
M.A. Erdogdu, L. Mackey and O. Shamir*,
Global Non-convex Optimization with Discretized Diffusions,
NeurIPS 2018, Proceedings of Advances in Neural Information Processing Systems
L.H. Dicker and M.A. Erdogdu*,
Flexible results for quadratic forms with applications to variance components estimation,
Annals of Statistics 2018
M.A. Erdogdu, Y. Deshpande and A. Montanari,
Inference in Graphical Models via Semidefinite Programming Hierarchies,
NeurIPS 2017, Proceedings of Advances in Neural Information Processing Systems
H. Inan, M.A. Erdogdu and M. Schnitzer,
Robust Estimation of Neural Signals in Calcium Imaging,
NeurIPS 2017, Proceedings of Advances in Neural Information Processing Systems
M.A. Erdogdu,
Newton-Stein Method: An optimization method for GLMs via Stein's lemma,
JMLR 2016, Journal of Machine Learning Research
M.A. Erdogdu, M. Bayati and L.H. Dicker,
Scaled Least Squares Estimator for GLMs in Large-Scale Problems,
NeurIPS 2016, Proceedings of Advances in Neural Information Processing Systems
L.H. Dicker and M.A. Erdogdu*,
Maximum Likelihood for Variance Estimation in High-Dimensional Linear Models,
AISTATS 2016, Proceedings of International Conference on Artificial Intelligence and Statistics
M.A. Erdogdu,
Newton-Stein Method: A second order method for GLMs via Stein's lemma,
NeurIPS 2015 (Spotlight), Proceedings of Advances in Neural Information Processing Systems
M.A. Erdogdu and A. Montanari*,
Convergence rates of sub-sampled Newton methods,
NeurIPS 2015, Proceedings of Advances in Neural Information Processing Systems
R. Kolte, M.A. Erdogdu and A. Ozgur,
Accelerating SVRG via second-order information,
NeurIPS 2015, Advances in Neural Information Processing Systems, OptML Workshop
Q. Zhao, M.A. Erdogdu, H. He, A. Rajaraman and J. Leskovec,
SEISMIC: A Self-Exciting Point Process Model for Predicting Tweet Popularity,
KDD 2015, Proceedings of Conference on Knowledge Discovery and Data Mining
M.A. Erdogdu and N. Fawaz,
Privacy-utility trade-off under continual observation,
ISIT 2015, Proceedings of IEEE International Symposium on Information Theory
M.A. Erdogdu, N. Fawaz and A. Montanari,
Privacy-utility trade-off for time-series with application to smart-meter data,
AAAI 2015, Association for the Advancement of Artificial Intelligence, Workshop on Computational Sustainability
M. Bayati, M.A. Erdogdu and A. Montanari*,
Estimating LASSO Risk and Noise Level,
NeurIPS 2013, Proceedings of Advances in Neural Information Processing Systems
Thesis