Universal Jailbreaks with Zico Kolter, Andy Zou, and Asher Trockman
From "The Cognitive Revolution" | AI Builders, Researchers, and Live Player Analysis
Length:
137 minutes
Released:
Sep 22, 2023
Format:
Podcast episode
Description
In this episode, Nathan sits down with three Carnegie Mellon researchers studying adversarial attacks and mimetic initialization: Zico Kolter, Andy Zou, and Asher Trockman. They discuss the motivation behind researching universal adversarial attacks on language models, how the attacks work, and the short-term harms and long-term risks of these jailbreaks. If you're looking for an ERP platform, check out our sponsor, NetSuite: http://netsuite.com/cognitive
TIMESTAMPS:
[00:00:00] - Introducing the podcast and guests Zico Kolter, Andy Zou, and Asher Trockman
[00:06:32] - Discussing the motivation and high-level strategy for the universal adversarial attack on language models
[00:09:33] - Explaining how the attacks work by adding nonsense tokens to maximize target sequence probability
[00:11:06] - Comparing to prior adversarial attacks in vision models
[00:13:47] - Details on the attack optimization process and discrete token search
[00:17:09] - The empirical notion of "mode switching" in the language models
[00:21:18] - Technical details on gradient computation across multiple models and prompts
[00:23:46] - Operating in one-hot vector space rather than continuous embeddings
[00:25:50] - Evaluating candidate substitutions across all positions to find the best update
[00:28:05] - Running the attack optimization for hundreds of steps across multiple GPUs
[00:39:14] - The difficulty of understanding the loss landscape and internal model workings
[00:43:55] - The flexibility afforded by separating the loss and optimization approach
[00:48:16] - The challenges of creating inherently robust models via adversarial training
[00:52:34] - Potential approaches to defense through filtering or inherent model robustness
[00:55:51] - Transferability results to commercial models like GPT-4 and Claude
[00:59:25] - Hypotheses on why the attacks transfer across different model architectures
[01:04:36] - The mix of human-interpretable and nonsense features in effective attacks
[01:08:29] - The appearance of intuitive manual jailbreak triggers in some attacks
[01:15:33] - Short-term harms of attacks vs long-term risks
[01:18:37] - Influencing those with incomplete understanding of LLMs to appreciate differences from human reasoning
[01:24:16] - Mitigating risks by training on filtered datasets vs broad web data
[01:29:16] - Curriculum learning as a strategy for both capability and safety
[01:30:35] - Influencing developers building autonomous systems with LLMs
[01:33:19] - Alienness of LLM failure modes compared to human reasoning
[01:35:45] - Getting inspiration from biological visual system structure
[01:40:35] - Initialization as an alternative to pretraining for small datasets
[01:51:41] - Encoding useful structures like grammars in initialization without training
[02:12:10] - Most ideas don't progress to research projects
[02:13:02] - Pursuing ideas based on interest and feasibility
[02:15:14] - Fun of exploring uncharted territory in ML research
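The attack walked through in the timestamps above (appending nonsense suffix tokens, then greedily swapping them one at a time to drive up the probability of a target response) can be made concrete with a toy sketch. Everything here is illustrative and not the paper's code: the real attack optimizes a cross-entropy loss through an LLM and uses gradients with respect to one-hot token vectors to shortlist candidate swaps, whereas this toy scorer just rewards a hidden suffix so the discrete search loop is easy to follow.

```python
import random

# Toy setup: the "model" is a black-box scorer that returns the loss of a
# fixed target given the current suffix. Here we fake it with a function
# that secretly rewards one particular suffix, just to make the search
# loop concrete. All names below are illustrative.
VOCAB = list("abcdefgh")
SECRET = list("decafb")  # the suffix the toy scorer rewards

def loss(suffix):
    """Lower is better: number of positions differing from the 'ideal'
    suffix. In the real attack this is the cross-entropy of the target
    response (e.g. "Sure, here is how to...") under the language model."""
    return sum(s != t for s, t in zip(suffix, SECRET))

def greedy_coordinate_search(suffix_len=6, steps=50, seed=0):
    rng = random.Random(seed)
    suffix = [rng.choice(VOCAB) for _ in range(suffix_len)]
    for _ in range(steps):
        best = (loss(suffix), None, None)
        # Evaluate every single-token substitution at every position and
        # keep the one that lowers the loss most. The paper's method prunes
        # this candidate set using gradients taken w.r.t. one-hot token
        # vectors; a toy version can afford to try them all.
        for pos in range(suffix_len):
            for tok in VOCAB:
                if tok == suffix[pos]:
                    continue
                cand = suffix[:pos] + [tok] + suffix[pos + 1:]
                cand_loss = loss(cand)
                if cand_loss < best[0]:
                    best = (cand_loss, pos, tok)
        if best[1] is None:  # no single swap improves: local optimum
            break
        suffix[best[1]] = best[2]
    return "".join(suffix), loss(suffix)

suffix, final_loss = greedy_coordinate_search()
```

Because each step applies the single best token swap, the toy loss drops monotonically until the hidden suffix is recovered; the real optimization runs for hundreds of steps across multiple models and prompts to find suffixes that transfer.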
LINKS:
Adversarial Attacks Paper: https://arxiv.org/abs/2307.15043
Mimetic Initialization on Self-Attention Layers: https://arxiv.org/pdf/2305.09828.pdf
X/Social:
@zicokolter (Zico Kolter)
@andyzou_jiaming (Andy Zou)
@ashertrockman (Asher Trockman)
@CogRev_podcast
SPONSORS: NetSuite | Omneky
NetSuite has 25 years of experience providing financial software for all your business needs. More than 36,000 businesses have already upgraded to NetSuite by Oracle, gaining visibility and control over their financials, inventory, HR, eCommerce, and more. If you're looking for an ERP platform ✅ head to NetSuite: http://netsuite.com/cognitive and download your own customized KPI checklist.
Omneky is an omnichannel creative generation platform that lets you launch hundreds of thousands of ad iterations that actually work, customized across all platforms, with a click of a button. Omneky combines generative AI and real-time advertising data. Mention "Cog Rev" for 10% off.
Music Credit: Stableaudio.com