
Exploring AI with Emily M. Bender


From Digital Citizen



Length:
32 minutes
Released:
Jun 25, 2024
Format:
Podcast episode

Description

Join us on a journey to learn more about the intersection of linguistics and AI with special guest Emily M. Bender. Come with us as we learn how linguistics functions in modern language models like ChatGPT.

Episode Notes

Discover the origins of language models, the negative implications of sourcing data to train these technologies, and the value of authenticity.

▶️ Guest Interview - Emily M. Bender

- Learn more about Emily M. Bender.
- Read "On the Dangers of Stochastic Parrots" (2021) by Emily M. Bender, Timnit Gebru, Angelina McMillan-Major, and Margaret Mitchell.
- Check out the publications by cognitive scientist Abeba Birhane.
- See work from AI research scientist Meg Mitchell.

Discussion Points

- Emily M. Bender is a Professor of Linguistics at the University of Washington. Her work focuses on grammar engineering and the societal impacts of language technology. She has spoken and written about what it means to make informed decisions about AI and large language models such as ChatGPT.
- "Artificial intelligence" (AI) is a marketing term coined in the 1950s by John McCarthy; it refers to an area of computer science.
- AI is a technology built using natural language processing and linguistics, the science of how language works. Understanding how language works is necessary to comprehend large language models' limitations and potential for misuse.
- "Language model" is the term for a type of technology designed to model the distribution of word forms in text. While early language models simply determined the relative frequency of words in a text, today's language models are far bigger, both in the data they store and in the amount of language they are trained on.
- As a society, we must keep reminding ourselves that synthetic text is not a credible information source. Before sharing information, it's smart to verify that it was written by a human rather than a machine. Valuing authenticity and citations is one of the most important things we can do.
- Distributional biases in the training data of large language models are reproduced in their output. The less care we put into curating training data, the more patterns and systems of oppression will be reproduced, regardless of whether they are presented as fact or fiction in the end result.
- Being a good digital citizen means avoiding products built on data theft and labor exploitation.
- On an individual level, we should insist on transparency around synthetic media. Part of the problem is that there is currently no watermarking at the source.
- There is a major need for national regulation of, and accountability for, synthetic text.
- We can also continue to increase the value of authenticity.

Find Us

- Digital Citizen website: fastmail.com/digitalcitizen. Check out our blog.
- Tweet us @Fastmail.
- Follow us on Mastodon: @fastmail@mastodon.social.

Review Us

If you love this show, please leave us a review on Apple Podcasts or wherever you listen to podcasts. Take our survey to tell us what you think at digitalcitizenshow.com/survey.
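The episode's description of a language model as something that "models the distribution of word forms in text" can be made concrete with a toy sketch. This is purely illustrative and not any system discussed in the episode: the function names and the example sentence are invented here. It shows the relative-frequency (unigram) idea the episode attributes to early language models, plus a context-conditioned (bigram) variant that hints at how modeling word distributions scales up.

```python
from collections import Counter

def unigram_probs(text):
    """Simplest 'language model': the relative frequency of each word form."""
    words = text.lower().split()
    total = len(words)
    return {w: c / total for w, c in Counter(words).items()}

def bigram_probs(text):
    """Conditional model: how likely is word b, given the preceding word a?"""
    words = text.lower().split()
    pair_counts = Counter(zip(words, words[1:]))   # count adjacent word pairs
    context_counts = Counter(words[:-1])           # count each pair's first word
    return {(a, b): c / context_counts[a] for (a, b), c in pair_counts.items()}

text = "the cat sat on the mat"
print(unigram_probs(text)["the"])         # "the" is 2 of 6 words -> 0.333...
print(bigram_probs(text)[("the", "cat")]  # after "the": "cat" once, "mat" once
      )                                   # -> 0.5
```

Modern large language models replace these count tables with neural networks trained on vastly more text, but the underlying task is the same: estimating which word forms are likely in a given context.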

Titles in the series (24)

Live your best digital life with Fastmail. Subscribe to Digital Citizen and listen to Fastmail CTO Ricardo Signes talk to great thinkers about the digital world. Learn how to be a more responsible digital citizen and make the Internet a better place.