Mahdi Haghifam


Postdoctoral Researcher,
Khoury College of Computer Sciences at Northeastern University
[Google Scholar] [GitHub] [Twitter] [LinkedIn]
Email (preferred): haghifam.mahdi@gmail.com
Email: m.haghifam@northeastern.edu


About Me

I am currently a Distinguished Postdoctoral Researcher at the Khoury College of Computer Sciences at Northeastern University, where I am fortunate to be working with Jonathan Ullman and Adam Smith.

I completed my PhD at the University of Toronto and the Vector Institute, where I was fortunate to be advised by Daniel M. Roy. I also received my B.Sc. and M.Sc. degrees in Electrical Engineering from Sharif University of Technology.

My research focuses broadly on the foundations and methodologies of machine learning (ML). Recognitions of my work include a Best Paper Award at ICML 2024, a Simons Institute (UC Berkeley) Research Fellowship, and the MITACS Accelerate Fellowship, as well as several honors for graduate research excellence from the University of Toronto, including the Henderson and Bassett Research Fellowship and the Viola Carless Smith Research Fellowship. Additionally, I was recognized as a top reviewer at NeurIPS in 2021 and 2023.

Outside my research activities, I enjoy playing and watching soccer, reading classic literature, and baking.

Research Overview

My research focuses on the theoretical foundations of and principled algorithm design for ML. More broadly, I am interested in statistical learning theory, statistics, and information theory. The central goal of my research is to address practical challenges in ML by developing tools and algorithms with rigorous theoretical guarantees that assess and ensure validity. This work is crucial for building trustworthy ML systems in high-stakes applications, where responsible deployment must be balanced with strong empirical performance. Some of the questions I have been thinking about: When and how can models generalize beyond their training data? Under what conditions do they memorize sensitive information? And how can we preserve privacy while still learning effectively from sensitive data?

Generalization in Machine Learning: Can we understand learning and generalization by studying the information complexity of learning algorithms?

  • Applications of information measures to reason about the generalization of practical algorithms (NeurIPS’19, NeurIPS’20).

  • Connections between generalization frameworks based on information measures and classical approaches in learning theory, such as VC theory and uniform stability (NeurIPS’21, ALT’23).

Membership Inference and Memorization: Do accurate algorithms need to leak lots of information from their training data?

  • Exact tradeoffs between learning and membership inference in the fundamental setting of stochastic convex optimization (ICML’24, arXiv’25).

  • The statistical challenges of membership inference attacks (upcoming ’25).

Differential Privacy: How can algorithms learn from sensitive data without revealing private information?

  • Practical algorithms that maintain worst-case privacy guarantees while adapting to dataset properties to achieve better performance (NeurIPS’24).

  • Faster private optimization algorithms using second-order methods (NeurIPS’23).

Internships and Research Visits

During Summer and Fall 2022, I was a research intern at Google Brain (Differential Privacy Team), where I was extremely lucky to be mentored by Thomas Steinke and Abhradeep Guha Thakurta. I was also a research intern at Element AI (ServiceNow Research Lab) in Winter 2019 and Fall 2020, where I had the privilege of working with Gintare Karolina Dziugaite in the Trustworthy AI Research Program. In early 2020, I had the opportunity to visit the Institute for Advanced Study in Princeton as a visiting student during the special year program on optimization, statistics, and theoretical machine learning.

Contact Me!

Feel free to reach out if you'd like to discuss research ideas. I'm also happy to offer guidance and support to those applying to graduate programs, especially individuals who might not otherwise have access to such assistance.