#69 DR. THOMAS LUX - Interpolation of Sparse High-Dimensional Data

From Machine Learning Street Talk (MLST)

Length: 51 minutes
Released: Mar 12, 2022
Format: Podcast episode

Description

Today we are speaking with Dr. Thomas Lux, a research scientist at Meta in Silicon Valley. 

In some sense, all of supervised machine learning can be framed through the lens of geometry. All training data exists as points in Euclidean space, and we want to predict the value of a function at all those points. Neural networks appear to be the modus operandi these days for many domains of prediction. In that light, we might ask ourselves: from a geometric perspective, what makes neural networks better than classical techniques like k-nearest neighbors? Our guest today has done research on exactly that problem, trying to define error bounds for approximations in terms of directions, distances, and derivatives.
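
To make that geometric framing concrete, here is a minimal sketch (not from the episode; it assumes scikit-learn and a toy two-dimensional target). Both k-nearest neighbors and a small MLP are simply different ways of predicting a function's value at unseen points from a finite sample of points in Euclidean space:

```python
import numpy as np
from sklearn.neighbors import KNeighborsRegressor
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
X = rng.uniform(-1.0, 1.0, size=(500, 2))         # training points in R^2
y = np.sin(3.0 * X[:, 0]) + 0.5 * X[:, 1] ** 2    # function values at those points

# Local averaging: the prediction depends on distances to nearby training points.
knn = KNeighborsRegressor(n_neighbors=5).fit(X, y)

# Learned piecewise-linear partition of the space (ReLU MLP).
mlp = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=2000,
                   random_state=0).fit(X, y)

X_new = rng.uniform(-1.0, 1.0, size=(200, 2))
y_new = np.sin(3.0 * X_new[:, 0]) + 0.5 * X_new[:, 1] ** 2
print("kNN mean abs error:", np.abs(knn.predict(X_new) - y_new).mean())
print("MLP mean abs error:", np.abs(mlp.predict(X_new) - y_new).mean())
```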

The insights from Thomas's work point to why neural networks are so good at problems where everything else fails, like image recognition. The key lies in their ability to ignore parts of the input space, perform nonlinear dimension reduction, and concentrate their approximation power on the important parts of the function.
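
As a hedged illustration of that last point (my own toy construction, not Thomas's code): when the target depends on only a few of many input dimensions, one can fit a small scikit-learn MLP and inspect how much of its first-layer weight mass lands on the relevant directions versus the irrelevant ones.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(1)
X = rng.uniform(-1.0, 1.0, size=(3000, 20))
y = np.sin(3.0 * X[:, 0]) * X[:, 1]         # only dimensions 0 and 1 matter

mlp = MLPRegressor(hidden_layer_sizes=(64,), max_iter=3000,
                   random_state=0).fit(X, y)

W1 = mlp.coefs_[0]                           # shape (20, 64): input dim -> hidden units
per_dim = np.linalg.norm(W1, axis=1)         # first-layer weight mass per input dimension
print("weight mass on relevant dims 0-1 :", per_dim[:2])
print("mean weight mass on the other 18 :", per_dim[2:].mean())
```

If the network has learned to ignore the irrelevant directions, the first two entries should dominate; standard kNN, by contrast, has no mechanism to down-weight those directions, since every coordinate enters the Euclidean distance equally.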

[00:00:00] Intro to Show
[00:04:11] Intro to Thomas (Main show kick off)
[00:04:56] Interpolation of Sparse High-Dimensional Data
[00:12:19] Where does one place the basis functions to partition the space, the perennial question
[00:16:20] The sampling phenomenon -- where did all those dimensions come from?
[00:17:40] The placement of the MLP basis functions: they are not where you think they are
[00:23:15] NNs only extrapolate when given explicit priors to do so, CNNs in the translation domain
[00:25:31] Transformers extrapolate in the permutation domain
[00:28:26] NN priors work by creating space junk everywhere
[00:36:44] Are vector spaces the way to go? On discrete problems
[00:40:23] Activation functions
[00:45:57] What can we prove about NNs? Gradients without backprop

Interpolation of Sparse High-Dimensional Data [Lux]
https://tchlux.github.io/papers/tchlux-2020-NUMA.pdf

A Spline Theory of Deep Learning [Balestriero]
https://proceedings.mlr.press/v80/balestriero18b.html

Gradients without Backpropagation '22 [Baydin]
https://arxiv.org/pdf/2202.08587.pdf


This is the audio podcast for the ML Street Talk YouTube channel at https://www.youtube.com/c/MachineLearningStreetTalk. Thanks for checking us out! We think that scientists and engineers are the heroes of our generation. Each week we have a hard-hitting discussion with the leading thinkers in the AI space. Street Talk is unabashedly technical and non-commercial, so you will hear no annoying pitches. Corporate- and MBA-speak is banned on Street Talk; "data product" and "digital transformation" are banned, we promise :) Hosted by Dr. Tim Scarfe, Dr. Yannic Kilcher, and Dr. Keith Duggar.