
Beyond Language: Inside a Hundred-Trillion-Token Video Model

From AI + a16z

Length:
65 minutes
Released:
Jul 3, 2024
Format:
Podcast episode

Description

In this episode of the AI + a16z podcast, Luma Chief Scientist Jiaming Song joins a16z General Partner Anjney Midha to discuss Jiaming's esteemed career in video models, culminating thus far in Luma's recently released Dream Machine 3D model, which shows an ability to reason about the world across a variety of aspects. Jiaming covers the history of image and video models, shares his vision for the future of multimodal models, and explains why he thinks Dream Machine demonstrates emergent reasoning capabilities. In short: because it was trained on a volume of high-quality video data that, if measured in relation to language data, would amount to hundreds of trillions of tokens.

Here's a sample of the discussion, where Jiaming explains the "bitter lesson" as applied to training generative models, and in the process sums up a big component of why Dream Machine can do what it does by using context-rich video data:

"For a lot of the problems related to artificial intelligence, it is often more productive in the long run to use methods that are simpler but use more compute, [rather] than trying to develop priors, and then trying to leverage the priors so that you can use less compute.

"Cases in this question first happened in language, where people were initially working on language understanding, trying to use grammar or semantic parsing, these kinds of techniques. But eventually these tasks began to be replaced by large language models. And a similar case is happening in the vision domain, as well . . . and now people have been using deep learning features for almost all the tasks. This is a clear demonstration of how using more compute and having less priors is good.

"But how does it work with language? Language by itself is also a human construct. Of course, it is a very good and highly compressed kind of knowledge, but it's definitely a lot less data than what humans take in day to day from the real world . . .

"[And] it is a vastly smaller data set size than visual signals. And we are already almost exhausting the . . . high-quality language sources that we have in the world. The speed at which humans can produce language is definitely not enough to keep up with the demands of the scaling laws. So even if we have a world where we can scale up the compute infrastructure for that, we don't really have the infrastructure to scale up the data efforts . . .

"Even though people would argue that the emergence of large language models is already evidence of the scaling law . . . against the rule-based methods in language understanding, we are arguing that language by itself is also a prior in the face of more of the richer data signal that is happening in the physical world."

Learn more:
Dream Machine
Jiaming's personal site
Luma careers
The bitter lesson

Follow everyone on X:
Jiaming Song
Anjney Midha
Check out everything a16z is doing with artificial intelligence here, including articles, projects, and more podcasts.

Titles in the series (16)

Artificial intelligence is changing everything from art to enterprise IT, and a16z is watching all of it with a close eye. This podcast features discussions with leading AI engineers, founders, and experts, as well as our general partners, about where the technology and industry are heading.