"Any part of an industry that uses machine learning currently will need solutions that are robust and reliable. Anything that people use machine learning for, from healthcare and finance to transportation and even ads and content filtering systems, is impacted. Anywhere machine learning seems like a good idea, making it robust and reliable is key."
Aleksander Madry
Deep Learning

Madry runs a lab, the Madry Lab, whose focus is the science of modern machine learning. There, the group aims to combine theoretical and empirical insights to build a principled and thorough understanding of key machine learning techniques, such as deep learning, as well as of the challenges that arise in this context. One of the major themes they investigate is rethinking machine learning from the perspective of security and robustness.

The lab is led by Madry and comprises a mix of graduate and undergraduate students. Its publications include "Clean-Label Backdoor Attacks," "Are Deep Policy Gradient Algorithms Truly Policy Gradient Algorithms?," "Training for Faster Adversarial Robustness Verification via Inducing ReLU Stability," "Prior Convictions: Black-Box Adversarial Attacks with Bandits and Priors," "Robustness May Be at Odds with Accuracy," and many others.

Research and Works

Aleksander Madry’s research aims to identify and tackle key algorithmic challenges in today’s computing. His goal is to develop theoretical ideas and tools that will ultimately change the way we approach optimization in all its shapes and forms, both in theory and in practice. Most recently, his primary research goal has been to make machine learning more robust, from both a security and a usability perspective. Madry and his team build tools for improving the reliability and security of machine-learning models used in mission-critical applications.

“Part of my work is about algorithms and using optimization to solve real-world problems faster and more efficiently. The other part is trying to understand modern machine learning, trying to get the science of it, and trying to develop new methods of machine learning that make it more robust.”

Machine learning can be applied to many different industry sectors, including healthcare, finance, and transportation, self-driving cars among them. However, with these advances in machine learning come problems of trust and security, and Madry and his group hope to address these new challenges. The goal is not only to make machine learning work, but also to understand it and, ultimately, to be able to rely on it.
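To make the trust-and-security concern concrete, one well-known failure mode studied in this area is the adversarial example: an input perturbed so slightly that a person would not notice the change, yet the model's prediction flips. The snippet below is a minimal, hypothetical sketch of the fast gradient sign method (FGSM), a basic attack of this kind, run against a stand-in PyTorch classifier; the model, data, and epsilon value are illustrative placeholders, not the lab's actual code.

    import torch
    import torch.nn as nn

    # Tiny stand-in classifier (hypothetical; any differentiable model would do).
    model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))
    model.eval()

    def fgsm_attack(model, x, y, epsilon=0.1):
        # Fast gradient sign method: nudge the input in the direction that
        # increases the loss, bounded by epsilon in the L-infinity norm.
        x_adv = x.clone().detach().requires_grad_(True)
        loss = nn.functional.cross_entropy(model(x_adv), y)
        loss.backward()
        x_adv = x_adv + epsilon * x_adv.grad.sign()
        return x_adv.clamp(0.0, 1.0).detach()

    # Dummy "image" and label, just to show the call pattern.
    x = torch.rand(1, 1, 28, 28)
    y = torch.tensor([3])
    x_adv = fgsm_attack(model, x, y)
    print((x_adv - x).abs().max())  # perturbation stays within epsilon

Robustness research in this vein asks how to train models whose predictions remain stable under every such bounded perturbation, not just on clean inputs.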

“Any part of an industry that uses machine learning currently will need solutions that are robust and reliable. Anything that people use machine learning for, from healthcare and finance to transportation and even ads and content filtering systems, is impacted. Anywhere machine learning seems like a good idea, making it robust and reliable is key.”

Madry’s team (madry-lab.ml) comprises a mix of graduate and undergraduate students and one can read about their work at gradientscience.org.

“Part of my work is about algorithms and using optimization to solve real-world problems faster and more efficiently. The other part is in machine learning: trying to understand modern machine learning, trying to get the science of it, and trying to develop new methods of machine learning that make it more robust, both from a security point of view and in terms of usability in the real world.”
Aleksander Madry
Academic Achievements

Madry received his PhD from MIT in 2011. Afterward, he spent a year as a postdoctoral researcher at Microsoft Research New England and then served on the faculty of the École Polytechnique Fédérale de Lausanne (EPFL) in Switzerland until 2015. He is currently an Associate Professor of Computer Science in the MIT EECS Department and a principal investigator at MIT CSAIL. His work spans topics ranging from developing new algorithms using continuous optimization to combining theoretical and empirical insights to build a more principled and thorough understanding of key machine learning tools. A major theme of his research is rethinking machine learning from the perspective of security and robustness.