> Sutskever "the brain behind the large language models that helped build ChatGPT" ...
Well, no ...
LLMs, i.e. today's AI, are all based on the Transformer architecture, designed by Jakob Uszkoreit, Noam Shazeer, et al. at Google ("Attention Is All You Need").
Sutskever, sitting at OpenAI, decided to play with what Google (Jakob, Noam) had designed, intrigued to see how much better it would get as it was scaled up.
ChatGPT, the first actually usable LLM, came about through the addition of RLHF (reinforcement learning from human feedback), which turned a purely statistical generator into one that was reasonable to interact with: following instructions, answering questions, etc. RLHF seems to have been invented by Alec Radford, Dario Amodei (now of Anthropic), and others; Sutskever's name isn't even on the paper ("Fine-Tuning Language Models from Human Preferences").
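For what it's worth, the recipe in that line of work is roughly: first train a reward model on pairs of outputs that humans have ranked, then fine-tune the language model with RL against that reward, with a KL penalty keeping it close to the original model. Here's a toy sketch of the shape of it (made-up dimensions and fake data, a plain REINFORCE-style update instead of the paper's PPO - illustrative, not their implementation):

```python
import torch
import torch.nn.functional as F

torch.manual_seed(0)
DIM, VOCAB = 16, 8  # toy sizes, not real model dimensions

# Toy stand-ins: the "policy" maps a state vector to next-token logits,
# the "reward model" maps a state vector to a scalar score.
policy = torch.nn.Linear(DIM, VOCAB)
ref_policy = torch.nn.Linear(DIM, VOCAB)  # frozen copy of the pretrained model
ref_policy.load_state_dict(policy.state_dict())
reward_model = torch.nn.Linear(DIM, 1)

# Stage 1: train the reward model on pairwise human preferences.
# Loss: -log sigmoid(r(chosen) - r(rejected)).
rm_opt = torch.optim.Adam(reward_model.parameters(), lr=1e-2)
for _ in range(100):
    chosen, rejected = torch.randn(32, DIM), torch.randn(32, DIM)  # fake data
    margin = reward_model(chosen) - reward_model(rejected)
    rm_loss = -F.logsigmoid(margin).mean()
    rm_opt.zero_grad()
    rm_loss.backward()
    rm_opt.step()

# Stage 2: RL fine-tuning, maximizing reward minus a KL penalty that
# keeps the policy near the reference model (the paper used PPO here).
beta = 0.1
pg_opt = torch.optim.Adam(policy.parameters(), lr=1e-3)
for _ in range(100):
    state = torch.randn(32, DIM)  # fake "prompt" states
    logits = policy(state)
    action = torch.multinomial(F.softmax(logits, dim=-1), 1).squeeze(-1)
    logp = F.log_softmax(logits, dim=-1).gather(-1, action[:, None]).squeeze(-1)
    with torch.no_grad():
        reward = reward_model(state).squeeze(-1)
        ref_logp = (F.log_softmax(ref_policy(state), dim=-1)
                    .gather(-1, action[:, None]).squeeze(-1))
    # Per-sample KL approximated by the log-prob ratio on the sampled action.
    advantage = reward - beta * (logp.detach() - ref_logp)
    pg_loss = -(advantage * logp).mean()  # REINFORCE-style policy update
    pg_opt.zero_grad()
    pg_loss.backward()
    pg_opt.step()
```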