Draft:Andrew Ilyas
Computer scientist
Andrew Ilyas is a computer scientist and assistant professor in the Software and Societal Systems Department at Carnegie Mellon University.[1] His research focuses on reliable and predictable machine-learning systems.[2] He has co-authored work on adversarial examples, data attribution in machine learning, and defenses against malicious AI image editing.[3][4][5][6]
Education and career
Ilyas attended the Massachusetts Institute of Technology as an undergraduate, majoring in computer science and mathematics.[2] Before joining Carnegie Mellon, he was a Stein Fellow in Stanford University's Department of Statistics and a PhD student at MIT, where he was advised by Constantinos Daskalakis and Aleksander Madry.[2]
Research
Ilyas was a co-author of Synthesizing Robust Adversarial Examples, presented at ICML 2018, which described robust 3D adversarial objects fabricated in the physical world.[3] Reporting on the work in 2017, Fast Company described the project as showing that machine-learning systems could be fooled by three-dimensional objects in the real world.[7] His 2019 paper Adversarial Examples Are Not Bugs, They Are Features argued that adversarial examples can arise from non-robust but predictive features in data.[4] The paper and related work were covered by Wired and Science.[8][9]
In 2022, Ilyas co-authored Datamodels: Predicting Predictions from Training Data, a paper on analyzing model behavior in terms of training data.[5] He also co-authored Raising the Cost of Malicious AI-Powered Image Editing (2023), part of the research behind the PhotoGuard system for resisting AI-based image manipulation.[6] The PhotoGuard work was covered by PetaPixel and VentureBeat.[10][11]
Recognition
In 2025, Ilyas received the George M. Sprowls PhD Thesis Award in Artificial Intelligence and Decision Making from MIT EECS.[12]
