
BI 139 Marc Howard: Compressed Time and Memory

From Brain Inspired

Length: 80 minutes
Released: Jun 20, 2022
Format: Podcast episode

Description

Check out my free video series about what's missing in AI and Neuroscience

Support the show to get full episodes and join the Discord community.

Marc Howard runs his Theoretical Cognitive Neuroscience Lab at Boston University, where he develops mathematical models of cognition, constrained by psychological and neural data. In this episode, we discuss the idea that a Laplace transform and its inverse may serve as a unified framework for memory. In short, our memories are compressed on a continuous log scale: as memories get older, their representations "spread out" in time. This kind of representation appears to be ubiquitous in the brain, suggesting it is a canonical computation the brain uses to support a wide variety of cognitive functions. We also discuss some of the ways Marc is incorporating this mathematical operation into deep networks to improve their ability to handle information at different time scales.
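To make the compression idea concrete, here is a minimal sketch in Python/NumPy. It is not code from Marc's lab: it drives a bank of leaky integrators that store the real Laplace transform of an input's history, then reads them out with the Post approximation to the inverse transform on a log-spaced grid of past times. The order k, the tau_star grid, and the numerical derivative are illustrative assumptions, not parameters from the papers below.

```python
# Minimal sketch of the Laplace-transform memory idea (illustrative, not code from
# Marc Howard's lab): a bank of leaky integrators holds the real Laplace transform
# of the input's history, and the Post approximation to the inverse transform reads
# out a log-compressed estimate of "what happened when".
from math import factorial

import numpy as np

k = 4                                     # sharpness of the approximate inverse (assumed)
tau_star = np.geomspace(5.0, 500.0, 50)   # log-spaced past times to represent (assumed)
s = k / tau_star                          # one integrator decay rate per tau_star

def run_memory(signal, dt=1.0):
    """Drive the integrators with a 1-D signal; return the compressed past at each step."""
    F = np.zeros_like(s)                  # Laplace-domain state F(s, t)
    snapshots = []
    for f_t in signal:
        # dF/dt = -s * F + f(t), integrated with exact decay so any dt is stable
        F = F * np.exp(-s * dt) + f_t * dt
        # Post approximation: f_tilde(tau*) ~ (-1)^k / k! * s^(k+1) * d^k F / ds^k,
        # with the k-th derivative taken numerically across the bank of s values
        dkF = F.copy()
        for _ in range(k):
            dkF = np.gradient(dkF, s)
        snapshots.append(((-1) ** k / factorial(k)) * s ** (k + 1) * dkF)
    return np.array(snapshots)            # rows: time steps, columns: tau_star

# A single brief input produces a bump over tau_star that drifts toward larger
# tau_star and broadens as the event recedes into the past, like time cells.
impulse = np.zeros(600)
impulse[10] = 1.0
past = run_memory(impulse)
print(past.shape)                         # (600, 50)
```

Log-spacing the tau_star grid is what devotes equal resources to each octave of elapsed time, which is the sense in which the representation of the past is compressed.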



Theoretical Cognitive Neuroscience Lab. Related papers:
Memory as perception of the past: Compressed time in mind and brain.
Formal models of memory based on temporally-varying representations.
Cognitive computation using neural representations of time and space in the Laplace domain.
Time as a continuous dimension in natural and artificial networks.
DeepSITH: Efficient learning via decomposition of what and when across time scales.



0:00 - Intro
4:57 - Main idea: Laplace transforms
12:00 - Time cells
20:08 - Laplace, compression, and time cells
25:34 - Everywhere in the brain
29:28 - Episodic memory
35:11 - Randy Gallistel's memory idea
40:37 - Adding Laplace to deep nets
48:04 - Reinforcement learning
1:00:52 - Brad Wyble Q: What gets filtered out?
1:05:38 - Replay and complementary learning systems
1:11:52 - Howard Goldowsky Q: Gyorgy Buzsaki
1:15:10 - Obstacles

Titles in the series (100)

Neuroscience and artificial intelligence work better together. Brain Inspired is a celebration and exploration of the ideas driving our progress to understand intelligence. I interview experts about their work at the interface of neuroscience, artificial intelligence, cognitive science, philosophy, psychology, and more: the symbiosis of these overlapping fields, how they inform each other, where they differ, what the past brought us, and what the future brings. Topics include computational neuroscience, supervised machine learning, unsupervised learning, reinforcement learning, deep learning, convolutional and recurrent neural networks, decision-making science, AI agents, backpropagation, credit assignment, neuroengineering, neuromorphics, emergence, philosophy of mind, consciousness, general AI, spiking neural networks, data science, and a lot more. The podcast is not produced for a general audience. Instead, it aims to educate, challenge, inspire, and hopefully entertain those interested in learning more about neuroscience and AI.