Oriol Vinyals, who leads DeepMind's deep learning team, talks about AlphaCode, his group's code-writing language model, and DeepMind's winding road toward artificial general intelligence.
Tom Denton, a software engineer in Google's bioacoustics group, talks about new algorithms to separate individual bird songs from the cacophony of the forest - and gives some examples. The Eye on AI podcast is sponsored by ClearML, the MLOps solution.
Andrew Ng, founder of Google Brain, Coursera and Landing AI, talks about his vision of data-centric AI, MLOps and the future of supervised vs unsupervised learning. The Eye on AI podcast is sponsored by ClearML.
Tom Siebel, founder and CEO of C3.ai, talks about AI projects including military target acquisition and precision healthcare, while musing about the dark side of our technological future. The Eye on AI podcast is sponsored by ClearML.
Max Bileschi, a software engineer at Google Research, talks about his team's application of convolutional neural networks to predict the function of amino acid sequences in a protein. Eye on AI is sponsored by ClearML.
Pushmeet Kohli, the head of DeepMind's AI for Science team and one of the brains behind AlphaFold, talks about the machine learning system that is helping solve the protein folding problem. The episode is sponsored by ClearML, an open-source MLOps solution.
Currently the largest AI system in the world is China's WuDao 2.0, a sparse, multimodal, large language model with 1.75 trillion parameters. Tang Jie, a professor at China's Tsinghua University who leads the WuDao team, talks about how the model was built, why it is unique and what his team plans for the future.
Connor Leahy, one of the minds behind EleutherAI and its open-source large language model, GPT-J, talks about building such models and their implications for the future.
Robert O. Work, former Deputy Secretary of Defense and, until recently, co-chairman of the National Security Commission on AI, talks about competition between the US and China to integrate AI into their military capabilities.
Stephen DeAngelis, head of Enterra Solutions, reminds us that so-called Good Old-Fashioned AI remains a powerful tool. He talks about leveraging knowledge bases, inference engines and symbolic logic to make decisions about large dynamic systems.
Daniel Ho, associate director of Stanford's Institute for Human-Centered Artificial Intelligence, talks about the proposed National AI Research Resource, an effort to expand the data and compute available to academic researchers, leveling the playing field with researchers in private companies.
Andrew Feldman, co-founder and CEO of Cerebras Systems, talks about the company's wafer-scale computer chip optimized for machine learning, and about a network of those chips the company has built with as much computing power as a human brain.
Adobe's head of research, Gavin Miller, talks about AI-enhanced creativity, guarding against manipulation of visual media and his own AI-enabled robot snakes.
Seth Dobrin, chief AI officer at IBM, talks about the company's tools to increase the trustworthiness, fairness and explainability of AI models.