Mahdi Haghifam

Email (preferred): haghifam.mahdi@gmail.com
Email: mhaghifam@ttic.edu


About Me

I am an ML researcher interested in developing principled algorithms for reliable and efficient AI/ML. My work spans foundations of ML — including optimization and generalization — as well as privacy-preserving ML, privacy auditing, and memorization in learning systems. Recently, I've been developing principled inference-time algorithms to address challenges in LLM safety and efficiency.

I'm currently a Research Assistant Professor at TTIC. Previously, I was a Distinguished Postdoctoral Researcher at Northeastern University, working with Jonathan Ullman and Adam Smith. I completed my Ph.D. in Machine Learning (thesis) at the University of Toronto / Vector Institute, advised by Daniel M. Roy. I also hold B.Sc. and M.Sc. degrees in Electrical Engineering from Sharif University of Technology.

I've worked in industry at Google DeepMind (with Thomas Steinke), where I built second-order differentially private optimization methods achieving 10–40× speedups over standard baselines, and at ServiceNow Research (with Gintare Karolina Dziugaite), where I developed data-dependent generalization estimation tools for deep networks. See details here.

Recognitions include a Best Paper Award at ICML 2024 (one of 10 awards among roughly 10,000 submissions), a Simons Institute–UC Berkeley Research Fellowship, and Top Reviewer at NeurIPS 2021 and 2023.

Research Overview and Selected Papers

I work on foundations and algorithms for machine learning, especially in settings where rigorous guarantees are important. My research aims to develop principled methods for modern ML systems that are both theoretically grounded and practically useful. Recently, my work has focused on reliable and efficient AI, spanning privacy-preserving learning, memorization, model stealing in large language models, and inference-time methods for controlling and verifying model behavior.

Industry Internship Experience

Google DeepMind | Research Intern
Mountain View | September 2022 – December 2022
Mentor: Thomas Steinke
- Developed a second-order differentially private optimization method achieving 10–40× wall-clock speedups over DP-SGD baselines at comparable accuracy and privacy
- Resulted in publications at NeurIPS 2023 (link) (code) and ICML 2023 (link)

ServiceNow Research | Research Intern
Toronto | November 2020 – March 2021
Mentor: Gintare Karolina Dziugaite
- Studied connections between different approaches to generalization in ML
- Resulted in a publication at NeurIPS 2021 (Spotlight) (link)

ServiceNow Research | Research Intern
Toronto | February 2019 – May 2019
Mentor: Gintare Karolina Dziugaite
- Proposed data-dependent generalization estimates for noisy SGD / SGLD using gradient disagreement. NeurIPS 2019 (link)
- The proposed method achieves, for the first time, non-vacuous generalization bounds in various modern ML setups.
- Built generalization-prediction tools in TensorFlow for CNNs and MLPs on image classification tasks. (code)

Contact Me!

Feel free to reach out if you'd like to discuss research ideas. I'm also happy to offer guidance and support to those applying to graduate programs, especially individuals who might not otherwise have access to such assistance.