Localizing and Editing Knowledge in LLMs with Peter Hase - #679

From The TWIML AI Podcast (formerly This Week in Machine Learning & Artificial Intelligence)

Length:
50 minutes
Released:
Apr 8, 2024
Format:
Podcast episode

Description

Today we're joined by Peter Hase, a fifth-year PhD student in the University of North Carolina NLP lab. We discuss "scalable oversight" and the importance of developing a deeper understanding of how large neural networks make decisions. We learn how interpretability researchers probe weight matrices, and explore the two schools of thought regarding how LLMs store knowledge. Finally, we discuss the importance of deleting sensitive information from model weights, and how "easy-to-hard generalization" could increase the risk of releasing open-source foundation models.

The complete show notes for this episode can be found at twimlai.com/go/679.

This Week in Machine Learning & AI is the most popular podcast of its kind. TWiML & AI caters to a highly targeted audience of machine learning and AI enthusiasts: data scientists, developers, founders, CTOs, engineers, architects, IT and product leaders, as well as tech-savvy business leaders. These creators, builders, makers, and influencers value TWiML as an authentic, trusted, and insightful guide to all that's interesting and important in the world of machine learning and AI. Technologies covered include machine learning, artificial intelligence, deep learning, natural language processing, neural networks, analytics, and more.