Alice Xiang
Global head of AI governance at Sony AI
Alice Xiang is a lawyer, statistician and global head of AI governance at Sony AI, and was named by Nature as one of ten scientists to watch in 2026.[1] Xiang previously worked at the Partnership on AI.
| Alice Xiang | |
|---|---|
| Awards | Privacy Papers for Policymakers Award |
| **Academic background** | |
| Alma mater | University of Oxford, Yale University, Harvard University |
| **Academic work** | |
| Institutions | Sony AI, Partnership on AI |
Education and career
Xiang studied at Yale Law School, earned a master's degree in development economics from the University of Oxford, and received both a bachelor's degree in economics and a master's degree in statistics from Harvard.[2]
In 2021, Women in AI Ethics included Xiang as one of their 100 Brilliant Women in AI Ethics.[3] Xiang received the Privacy Papers for Policymakers Award from the Future of Privacy Forum in 2025 for her essay "Mirror, Mirror, on the Wall, Who's the Fairest of Them All?".[4]
Xiang is global head of AI governance at Sony AI, and was named by Nature as one of ten scientists to watch in 2026.[1][5] Xiang led the development of the Fair Human-Centric Image Benchmark (FHIBE), a dataset of more than 10,000 ethically sourced images of humans, collected in a manner "that reflects diversity, mitigates bias, protects intellectual-property rights and includes consent".[6]
Previously, she worked for the Partnership on AI, where she was head of fairness, transparency, and accountability research. Xiang has also been a visiting scholar at Tsinghua University and served as chair of the ACM Conference on Fairness, Accountability, and Transparency.[7]
Selected works
- Alice Xiang; Jerone T. A. Andrews; Rebecca L. Bourke; et al. (5 November 2025). "Fair human-centric image dataset for ethical AI benchmarking". Nature. 648 (8092): 97–108. doi:10.1038/s41586-025-09716-2. ISSN 1476-4687. Wikidata Q137685229.
- Wiebke (Toussaint) Hutiri; Orestis Papakyriakopoulos; Alice Xiang (5 June 2024), Not My Voice! A Taxonomy of Ethical and Safety Harms of Speech Generators, Association for Computing Machinery, pp. 359–376, arXiv:2402.01708, doi:10.1145/3630106.3658911, Wikidata Q131164162
- Bhatt, Umang; Xiang, Alice; Sharma, Shubham; Weller, Adrian; Taly, Ankur; Jia, Yunhan; Ghosh, Joydeep; Puri, Ruchir; Moura, José M. F.; Eckersley, Peter (27 January 2020). "Explainable machine learning in deployment". Proceedings of the 2020 Conference on Fairness, Accountability, and Transparency. ACM. pp. 648–657. arXiv:1909.06342. doi:10.1145/3351095.3375624. ISBN 978-1-4503-6936-7.
- Bhatt, Umang; Antorán, Javier; Zhang, Yunfeng; Liao, Q. Vera; Sattigeri, Prasanna; Fogliato, Riccardo; Melançon, Gabrielle; Krishnan, Ranganath; Stanley, Jason; Tickoo, Omesh; Nachman, Lama; Chunara, Rumi; Srikumar, Madhulika; Weller, Adrian; Xiang, Alice (21 July 2021). "Uncertainty as a Form of Transparency: Measuring, Communicating, and Using Uncertainty". Proceedings of the 2021 AAAI/ACM Conference on AI, Ethics, and Society. ACM. pp. 401–413. doi:10.1145/3461702.3462571. ISBN 978-1-4503-8473-5.
- Andrus, McKane; Spitzer, Elena; Brown, Jeffrey; Xiang, Alice (3 March 2021). "What We Can't Measure, We Can't Understand: Challenges to Demographic Data Procurement in the Pursuit of Fairness". Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency. ACM. pp. 249–260. doi:10.1145/3442188.3445888. ISBN 978-1-4503-8309-7.
- "Mirror, Mirror, on the Wall, Who's the Fairest of Them All?". Daedalus. 28 February 2024. Retrieved 4 January 2026 – via American Academy of Arts and Sciences.