Andrew Ilyas



Andrew Ilyas is a computer scientist and assistant professor in the Software and Societal Systems Department at Carnegie Mellon University.[1] His research focuses on reliable and predictable machine-learning systems.[2] He has co-authored work on adversarial examples, data attribution in machine learning, and defenses against malicious AI image editing.[3][4][5][6]

Education and career

Ilyas attended the Massachusetts Institute of Technology as an undergraduate, majoring in computer science and mathematics.[2] Before joining Carnegie Mellon, he was a Stein Fellow in Stanford University's Department of Statistics and a PhD student at MIT, where he was advised by Constantinos Daskalakis and Aleksander Madry.[2]

Research

Ilyas was a co-author of Synthesizing Robust Adversarial Examples, presented at ICML 2018, which demonstrated robust 3D adversarial objects fabricated in the physical world.[3] Fast Company, reporting on the work in 2017, described it as showing that machine-learning systems could be fooled by real-world three-dimensional objects.[7] His 2019 paper Adversarial Examples Are Not Bugs, They Are Features argued that adversarial examples can arise from features in the data that are predictive but not robust.[4] The paper and related work were covered by Wired and Science.[8][9]

In 2022, Ilyas co-authored Datamodels: Predicting Predictions from Training Data, a paper on analyzing model behavior in terms of training data.[5] He also co-authored Raising the Cost of Malicious AI-Powered Image Editing (2023), part of the research behind the PhotoGuard system for resisting AI-based image manipulation.[6] The PhotoGuard work was covered by PetaPixel and VentureBeat.[10][11]

Recognition

In 2025, Ilyas received the George M. Sprowls PhD Thesis Award in Artificial Intelligence and Decision Making from MIT EECS.[12]

Selected publications

  • Synthesizing Robust Adversarial Examples (2018)[3]
  • Adversarial Examples Are Not Bugs, They Are Features (2019)[4]
  • Datamodels: Predicting Predictions from Training Data (2022)[5]
  • Raising the Cost of Malicious AI-Powered Image Editing (2023)[6]

References
