Elon Musk is setting out to understand the true nature of the universe with his newest company, xAI.
The team will discuss the new venture on Friday, July 14, in a Twitter Spaces session.
Here's a breakdown of each team member and their contributions to artificial intelligence:
Elon Musk
Elon Musk, co-founder of groundbreaking companies like SpaceX and Tesla, has been instrumental in advancing the field of artificial intelligence, particularly through co-founding OpenAI. His belief in the transformative potential of AI, coupled with his advocacy for ethical regulation, has fostered both technological innovation and the ongoing discourse around safe and responsible AI usage.
Igor Babuschkin
Igor Babuschkin, a distinguished AI researcher, holds an impressive portfolio of experience at leading tech companies like DeepMind and OpenAI. Handpicked by Elon Musk for his expertise, Babuschkin has been engaged in developing a rival to OpenAI's ChatGPT, which Musk has criticized as being trained to be "woke".
Manuel Kroiss
Manuel Kroiss, an accomplished software engineer, has leveraged his skills at tech giants such as Google and DeepMind, significantly contributing to advancements in reinforcement learning and artificial intelligence. Most recently, as co-author of the paper "Reverb: A Framework for Experience Replay," Kroiss helped introduce a highly efficient and customizable system for experience replay in reinforcement learning, furthering the state-of-the-art in the field.
Yuhuai (Tony) Wu
Yuhuai (Tony) Wu is an accomplished computer scientist and AI researcher, known for his work on automated mathematics and formal reasoning at Google's N2Formal team and a stealth startup. His influential research, recognized with accolades at ICLR and coverage in outlets like the New York Times and Quanta Magazine, centers on developing machines capable of human-like reasoning, particularly in mathematics.
Christian Szegedy
Christian Szegedy is a highly accomplished research scientist with extensive expertise in deep learning, artificial intelligence, computer vision, video analysis, and formal reasoning, currently serving as a Staff Research Scientist at Google. His educational background includes a Ph.D. in Applied Mathematics from the University of Bonn, where he studied matching theory, graph coloring, and developed optimization algorithms to solve complex problems in electronic design automation (EDA).
Jimmy Ba
Jimmy Ba is an Assistant Professor at the University of Toronto, where he leads research on efficient learning algorithms for deep neural networks, with a focus on building general problem-solving machines that mirror human-like efficiency and adaptability. He completed his PhD under the supervision of Geoffrey Hinton, holds a CIFAR AI Chair, and received the 2016 Facebook Graduate Fellowship in machine learning.
Toby Pohlen
Toby Pohlen, previously a Staff Research Engineer at Google DeepMind for over six years, has extensive experience in machine learning and reinforcement learning, including work on major projects like the AlphaStar League and Ape-X DQfD. With a stellar academic record at RWTH Aachen University, where he graduated at the top of his class in Computer Science, he has made significant contributions in areas such as computer vision, image processing, numerical analysis, and probability theory.
Ross Nordeen
Ross Nordeen, a technical program manager from Tesla who managed the influx of new hires and access requests at Twitter during Elon Musk's overhaul, is now part of xAI. He aims to help humanity pass the Great Filter, achieve Type I status on the Kardashev scale, and launch von Neumann probes to explore the galaxy.
Kyle Kosic
Kyle Kosic, an accomplished full-stack site reliability engineer and data scientist, has a rich academic background in machine learning, physics, and applied mathematics, and diverse professional experience spanning OpenAI, OnScale, Lucena Research, Wells Fargo, and IntegriChain. Known for his proficiency in a wide array of technical skills, including various programming languages, DevOps technologies, and AWS services, Kosic brings a passion for automation, scalability, distributed computing, and the effective application of machine learning models across domains.
Greg Yang
Greg Yang, honored with the Morgan Prize Honorable Mention in 2018, has spent over five years with Microsoft Research, where he made significant contributions, including the development of the theory of Tensor Programs and the practical scaling of neural networks. Known for his late-night eureka moments in Building 99, Yang, who joined Microsoft Research directly after undergraduate studies, expresses immense gratitude for the opportunities and growth experienced during his tenure at the company.
Guodong Zhang
Guodong Zhang, a highly distinguished researcher in machine learning and artificial intelligence, is based at the University of Toronto and the Vector Institute in Canada. Known for his focus on training, tuning, and aligning large language models, Zhang has made considerable contributions to the field, demonstrated through his extensive publication record. His work spans optimization, deep learning research such as 'Deep Learning without Shortcuts: Shaping the Kernel with Tailored Rectifiers', and Bayesian deep learning, including 'Differentiable Annealed Importance Sampling and the Perils of Gradient Noise'. His academic excellence has been recognized with several honors, including the Apple PhD Fellowship in 2022 and the Borealis AI Fellowship in 2020.
Zihang Dai
Zihang Dai is a Research Scientist at Google, with academic degrees from Tsinghua University and Carnegie Mellon University, and prior research internships at Baidu USA and MILA at the University of Montreal. His notable contributions at Google include generalized language pretraining with XLNet and efficient language processing with Funnel-Transformer, with his research interests focusing on deep learning for natural language processing and energy-based generative adversarial networks.