Mahdi Haghifam


Postdoctoral Researcher,
Khoury College of Computer Sciences at Northeastern University
[Google Scholar] [GitHub] [Twitter] [LinkedIn]
Email (preferred): haghifam.mahdi@gmail.com
Email: m.haghifam@northeastern.edu


About Me

I am currently a Distinguished Postdoctoral Researcher at the Khoury College of Computer Sciences, Northeastern University, where I am fortunate to be working with Jonathan Ullman and Adam Smith.

In August 2023, I completed my PhD at the University of Toronto and the Vector Institute, where I was fortunate to be advised by Daniel M. Roy. I also received my B.Sc. and M.Sc. degrees in Electrical Engineering from Sharif University of Technology.

My research focuses broadly on the foundations and methodologies for trustworthy machine learning. My work has been recognized with a Best Paper Award at ICML 2024, the MITACS Accelerate Fellowship, and several honors for graduate research excellence, including the Henderson and Bassett Research Fellowship and the Viola Carless Smith Research Fellowship. I was also recognized as a top reviewer at NeurIPS in 2021 and 2023.

Research

My research aims to make AI systems fundamentally more trustworthy by developing principled algorithms and mathematical frameworks for understanding how machines learn from data. It tackles core questions such as: How and when can models generalize beyond their training data? When do they memorize sensitive information? How can we preserve privacy while learning from sensitive data?

Generalization in Machine Learning: Can we understand learning and generalization by studying the information complexity of learning algorithms?

  • Applications of information measures to reason about the generalization of practical algorithms (NeurIPS’19, NeurIPS’20).

  • Connections between generalization frameworks based on information measures and classical approaches in learning theory, such as VC theory and uniform stability (NeurIPS’21, ALT’23); a representative bound from this line of work is sketched below.
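
To give a flavor of this information-theoretic framing, here is a classical bound due to Xu and Raginsky, stated for context rather than taken from the papers above: for a learning algorithm that outputs W from an i.i.d. sample S of n examples, with a σ-sub-Gaussian loss, the expected generalization gap is controlled by the mutual information I(W; S).

```latex
% Expected generalization gap of an algorithm with output W trained on an
% i.i.d. sample S of n examples, under a sigma-sub-Gaussian loss:
\[
  \bigl|\, \mathbb{E}\!\left[ \mathrm{gen}(W, S) \right] \bigr|
  \;\le\;
  \sqrt{ \frac{2\sigma^{2}}{n} \, I(W; S) }
\]
% i.e., the less information the learned output retains about its
% training sample, the smaller the generalization gap.
```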

Membership Inference and Memorization: Do accurate algorithms need to leak significant information about their training data?

  • Exact tradeoff between learning and membership inference in the fundamental setting of stochastic convex optimization (ICML’24, arXiv’25).

  • The statistical challenges of membership inference attacks (upcoming ’25); a toy attack of the simplest kind is sketched below.
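
To make the threat model concrete, here is a minimal sketch of the simplest style of membership inference attack, loss thresholding in the spirit of Yeom et al.; the data, names, and threshold are all illustrative, and this is not the attack analyzed in the papers above.

```python
import numpy as np

def loss_threshold_attack(losses: np.ndarray, threshold: float) -> np.ndarray:
    """Toy membership inference: flag an example as a training-set member
    when the model's loss on it falls below a threshold. Models tend to
    fit (and thus incur lower loss on) their own training data."""
    return losses < threshold

# Illustrative data: members typically incur lower loss than non-members.
rng = np.random.default_rng(0)
member_losses = rng.exponential(scale=0.2, size=1000)     # training examples
nonmember_losses = rng.exponential(scale=1.0, size=1000)  # fresh examples

losses = np.concatenate([member_losses, nonmember_losses])
is_member = np.concatenate([np.ones(1000, bool), np.zeros(1000, bool)])

guesses = loss_threshold_attack(losses, threshold=0.5)
accuracy = np.mean(guesses == is_member)
print(f"attack accuracy: {accuracy:.2f}")  # well above the 0.50 chance level
```

The attack succeeds here exactly because members incur systematically lower loss; the theoretical question is when such gaps are unavoidable for accurate learners.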

Differential Privacy: How can algorithms learn from sensitive data without revealing private information?

  • Practical algorithms that maintain worst-case privacy guarantees while adapting to dataset properties to achieve better performance (NeurIPS’24); a minimal example of such a worst-case guarantee is sketched below.

  • Faster private optimization algorithms using second-order methods (NeurIPS’23).
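
For readers new to differential privacy, here is a minimal sketch of a worst-case guarantee in action: the textbook Gaussian mechanism for releasing a mean, with noise calibrated to the largest possible effect of any single record. All names and parameters are illustrative; this is not an algorithm from the papers above.

```python
import numpy as np

def private_mean(data: np.ndarray, epsilon: float, delta: float,
                 lower: float, upper: float) -> float:
    """Release the mean of `data` with (epsilon, delta)-differential privacy
    via the Gaussian mechanism: clip each record to [lower, upper], then add
    Gaussian noise scaled to the worst-case effect of any single record."""
    n = len(data)
    clipped = np.clip(data, lower, upper)
    # Changing one record moves the clipped mean by at most this much.
    sensitivity = (upper - lower) / n
    # Standard Gaussian-mechanism calibration (valid for epsilon <= 1).
    sigma = sensitivity * np.sqrt(2 * np.log(1.25 / delta)) / epsilon
    return clipped.mean() + np.random.normal(0.0, sigma)

rng = np.random.default_rng(1)
incomes = rng.lognormal(mean=10.5, sigma=0.5, size=10_000)
print(private_mean(incomes, epsilon=0.5, delta=1e-6, lower=0.0, upper=200_000.0))
```

Because the noise scale depends only on the clipping range and n, the guarantee holds for every dataset, not just typical ones; data-adaptive methods like the one above aim to add less noise when the dataset is benign without giving up this worst-case protection.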

Internships and Research Visits

During Summer and Fall 2022, I was a research intern at Google Brain (Differential Privacy Team), where I was extremely lucky to be mentored by Thomas Steinke and Abhradeep Guha Thakurta. I was also a research intern at Element AI (now part of ServiceNow Research) in Winter 2019 and Fall 2020, where I had the privilege of working with Gintare Karolina Dziugaite in the Trustworthy AI Research Program. In early 2020, I visited the Institute for Advanced Study in Princeton as a visiting student for the special-year program on Optimization, Statistics, and Theoretical Machine Learning.

Contact Me!

Feel free to reach out if you'd like to discuss research ideas. I'm also happy to offer guidance and support to those applying to graduate programs, especially individuals who might not otherwise have access to such assistance.