Mahdi Haghifam
About Me
I am currently a Distinguished Postdoctoral Researcher at the Khoury College of Computer Sciences at Northeastern University, fortunate to be working with Jonathan Ullman and Adam Smith. I completed my PhD at the University of Toronto and the Vector Institute, where I was fortunate to be advised by Daniel M. Roy. I received my B.Sc. and M.Sc. degrees in Electrical Engineering from Sharif University of Technology. My research focuses broadly on the foundations and methodologies of machine learning (ML). Recognitions of my work include a Best Paper Award at ICML 2024, a Simons Institute (UC Berkeley) Research Fellowship, the MITACS Accelerate Fellowship, and several honors for graduate research excellence from the University of Toronto, including the Henderson and Bassett Research Fellowship and the Viola Carless Smith Research Fellowship. I was also recognized as a top reviewer at NeurIPS in 2021 and 2023. Outside of research, I enjoy playing and watching soccer, reading classic literature, and baking.

Research Overview
My research focuses on the theoretical foundations of and principled algorithm design for ML. More broadly, I am interested in statistical learning theory, statistics, and information theory. The central goal of my research is to address practical challenges in ML by developing tools and algorithms with rigorous theoretical guarantees that assess and ensure validity. This work is crucial for building trustworthy ML systems in high-stakes applications, where balancing responsible deployment with strong empirical performance is essential. Some of the questions I have been thinking about:

Generalization in Machine Learning: Can we understand learning and generalization by studying the information complexity of learning algorithms?
Membership Inference and Memorization: Do accurate algorithms need to leak lots of information from their training data?
Differential Privacy: How can algorithms learn from sensitive data without revealing private information?
Internships and Research Visits
During Summer and Fall 2022, I was a research intern at Google Brain (Differential Privacy Team), where I was extremely lucky to be mentored by Thomas Steinke and Abhradeep Guha Thakurta. I was also a research intern at Element AI (ServiceNow Research Lab) in Winter 2019 and Fall 2020, where I had the privilege of working with Gintare Karolina Dziugaite in the Trustworthy AI Research Program. In early 2020, I visited the Institute for Advanced Study in Princeton as a visiting student for the special year program on Optimization, Statistics, and Theoretical Machine Learning.

Contact Me!
Feel free to reach out if you'd like to discuss research ideas. I'm also happy to offer guidance and support to those applying to graduate programs, especially individuals who might not typically have access to such assistance.