Home     Background     Publications     Experience     Code    
 


Mehrdad Mahdavi
Research Assistant Professor
Toyota Technological Institute at Chicago (TTIC)


Email: * [at] {ttic | uchicago} [dot] edu (replace * by mahdavi)
Email: * . # [at] gmail [dot] com (replace * by mehrdad and # by mahdavi)

CV    



 

About Me



I am a research assistant professor at TTI Chicago, a philanthropically endowed academic computer science institute on the University of Chicago campus. I obtained my Ph.D. from Michigan State University in 2014 under the supervision of Prof. Rong Jin. Before joining MSU in 2009, I spent two years as a Ph.D. student at Sharif University of Technology. I received my M.Sc. from the Computer Engineering Department at Sharif University of Technology, where my advisor was Prof. Mohammad Ghodsi, and my B.S. from Amirkabir University of Technology (Tehran Polytechnic). In the past I have worked at Microsoft Research and NEC Laboratories America as a research intern.



Research Interests

I am broadly interested in machine learning and the design and analysis of algorithms, with a focus on:
  • Online Learning and Online (Stochastic) Convex Optimization
  • Large Scale Machine Learning and Big Data
  • Statistical and Computational Learning Theory
  • Convex and Combinatorial Optimization Theory


Selected Papers [more]

  • Lower and Upper Bounds on the Generalization of Stochastic Exponentially Concave Optimization [Errata] [Abstract]
    with Lijun Zhang and Rong Jin
    Conference on Learning Theory (COLT), 2015.

  • Random Projections for Classification: A Recovery Approach [Abstract]
    with Lijun Zhang, Rong Jin, Tianbao Yang, and Shenghuo Zhu
    IEEE Transactions on Information Theory, 2014.

  • Exploiting Smoothness in Statistical Learning, Sequential Prediction, and Stochastic Optimization [Slides]
    PhD Thesis, 2014.

  • Regret Bounded by Gradual Variation for Online Convex Optimization [Abstract]
    with Tianbao Yang, Rong Jin, and Shenghuo Zhu
    Machine Learning Journal, 2013.

  • Mixed Optimization for Smooth Functions [Abstract] [SUPP]
    with Lijun Zhang and Rong Jin
    Advances in Neural Information Processing Systems (NIPS), 2013.

  • Stochastic Convex Optimization with Multiple Objectives [Abstract]
    with Tianbao Yang and Rong Jin
    Advances in Neural Information Processing Systems (NIPS), 2013.

  • Passive Learning with Target Risk [Abstract] [Slides]
    with Rong Jin
    Conference on Learning Theory (COLT), 2013.

  • Stochastic Gradient Descent with Only One Projection [Abstract] [SUPP]
with Tianbao Yang, Rong Jin, and Shenghuo Zhu
    Advances in Neural Information Processing Systems (NIPS), 2012.

  • Online Optimization with Gradual Variations [Abstract]
with Chao-Kai Chiang, Tianbao Yang, C. J. Lee, C. J. Lu, Rong Jin, and Shenghuo Zhu
    Conference on Learning Theory (COLT), 2012.
This paper is a merger of two independent COLT submissions that both applied the same idea to obtain regret bounded by gradual variation. Please see the Commentary written by Satyen Kale.
    Winner of the Mark Fulk Best Student Paper Award

  • Trading Regret for Efficiency: Online Convex Optimization with Long Term Constraints [Abstract]
    with Rong Jin and Tianbao Yang
    Journal of Machine Learning Research (JMLR), 2012.





