Alec Radford
American AI researcher (born 1993)
From Wikipedia, the free encyclopedia
Alec Radford is an American artificial intelligence researcher.
| Alec Radford | |
|---|---|
| Born | April 1993 |
| Alma mater | Olin College |
| Occupation | Computer scientist, researcher |
| Employer | OpenAI (2016–2024) |
| Website | newmu |
Biography
Radford grew up in Texas.[1] He attended Cistercian Preparatory School, graduating in 2011.[2] While in high school, he became an Eagle Scout.[3] He attended Olin College, a college of about 400 students located outside of Boston, where he and fellow students Slater Victoroff, Diana Yuan, and Madison May founded the startup Indico in their dorm room. Radford dropped out of college in August 2014, and the team was joined by Luke Metz in 2015.[1] That year, Indico and the Facebook AI research lab in New York used generative adversarial networks, in which one part of a system creates fake data to fool another part into accepting it as real training data, to create realistic low-resolution images.[4] In April 2016, Jensen Huang, the chief executive of chipmaker Nvidia, gave one of the first public demonstrations of generative artificial intelligence, showing that a simple text prompt could produce realistic images, such as Romantic-era oil paintings. He stated that the technology was "from Yann LeCun's laboratory", but the research it was actually rooted in had been done by Indico. The lack of recognition for the team's work, according to Victoroff, "gutted us".[1]
Radford joined OpenAI around 2016.[5] There he worked on natural-language processing. In 2017, Radford trained a neural network on Amazon reviews. The model was fairly simple, with layers that allowed for human inspection. Upon exploring it, he found that the model had, entirely on its own, developed a single neuron linked to the sentiment of the reviews. This was a drastic improvement over previous neural networks that analyzed sentiment, which had to be directed to do so and specially trained on data explicitly labeled by sentiment. The development led OpenAI chief scientist Ilya Sutskever to consider that a future model, trained on more diverse language data, could map far more structures of meaning, eventually becoming a "learned core module" for superintelligence.[6]
In 2018, Radford was the lead author on OpenAI's seminal research paper on generative pre-trained transformers, which form the foundation of ChatGPT.[5] At OpenAI, he worked on the early GPT models; Whisper, a speech-recognition model; and the image generator DALL-E. He left OpenAI in December 2024 to pursue independent research.[5]
Around March 2025, Radford joined Thinking Machines Lab as an advisor, along with Bob McGrew, previously the chief research officer of OpenAI.[7]