State Space Models and Real-time Intelligence with Karan Goel and Albert Gu from Cartesia
From No Priors: Artificial Intelligence | Technology | Startups
Length:
34 minutes
Released:
Jun 27, 2024
Format:
Podcast episode
Description
This week on No Priors, Sarah Guo and Elad Gil sit down with Karan Goel and Albert Gu from Cartesia. Karan and Albert first met as Stanford AI Lab PhDs, where their lab invented State Space Models (SSMs), a fundamentally new primitive for training large-scale foundation models. In 2023, they founded Cartesia to build real-time intelligence for every device. One year later, Cartesia released Sonic, which generates high-quality, lifelike speech with a model latency of 135ms—the fastest for a model of this class.
Sign up for new podcasts every week. Email feedback to show@no-priors.com
Follow us on Twitter: @NoPriorsPod | @Saranormous | @EladGil | @krandiash | @_albertgu
Show Notes:
(0:00) Introduction
(0:28) Use Cases for Cartesia and Sonic
(1:32) Karan Goel & Albert Gu’s professional backgrounds
(5:06) State Space Models (SSMs) versus Transformer-Based Architectures
(11:51) Domain Applications for Hybrid Approaches
(13:10) Text to Speech and Voice
(17:29) Data, Size of Models and Efficiency
(20:34) Recent Launch of Text to Speech Product
(25:01) Multimodality & Building Blocks
(25:54) What’s Next at Cartesia?
(28:28) Latency in Text to Speech
(29:30) Choosing Research Problems Based on Aesthetic
(31:23) Product Demo
(32:48) Cartesia Team & Hiring