Why AI Doesn’t Understand Your Culture? Dr. Vered Shwartz on Cultural Bias in LLMs
Author: Women in AI Research (WiAIR)
Uploaded: 2025-10-29
Are today’s AI systems truly global — or just Western by design? 🌍
In this episode of Women in AI Research, Jekaterina Novikova and Malikeh Ehgaghi speak with Dr. Vered Shwartz (Assistant Professor at @UBC and @CIFARVideo AI Chair at the @vectorinstituteai) about the cultural blind spots in today's large language and vision-language models.
Don't have time for the full episode? Watch it in parts:
Part 1 - "Lost in Automatic Translation": • Ep 12 Part 1 - "Lost in Automatic Translat...
Part 2 - Hidden Cultural Codes in AI: • Ep 12 Part 2 - Hidden Cultural Codes in AI...
Part 3 - When AI Can't "See" Your Culture: • Ep 12 Part 3 - When AI Can't "See" Your Cu...
CHAPTERS:
00:00 Introduction to Women in AI Research Podcast
00:33 Guest introduction - Dr. Vered Shwartz
02:32 The Importance of Communication Skills in Academia
04:15 Navigating Faculty Roles and Student Supervision
07:52 Personal Experiences with Language Technologies
14:39 Exploring Cultural Representation in AI
20:29 The InfoGap Method and Cultural Information Gaps
22:29 Technical Challenges in Cross-Language Representation
24:02 Cultural Completeness and Wikipedia's Role
26:42 User Interaction with Language Models
37:22 Cross-Cultural Evaluation of Social Norm Biases
38:16 Cultural Alignment of Language Models
49:11 Exploring Vision Language Models
01:02:51 Benchmarking Cultural Bias in AI
01:06:54 Decentralizing AI Development
01:12:01 Addressing Biases in AI Development
01:15:52 Future Directions in AI Research
REFERENCES:
01:10 Vered Shwartz Google Scholar profile (https://scholar.google.ca/citations?u...)
07:57 Book "Lost in Automatic Translation" (https://lostinautomatictranslation.com/)
19:25 Elevator Recognition, by The Scottish Comedy Channel ( • Elevator Recognition | Burnistoun )
20:33 Locating Information Gaps and Narrative Inconsistencies Across Languages: A Case Study of LGBT People Portrayals on Wikipedia (https://arxiv.org/abs/2410.04282)
30:34 ECLeKTic: a Novel Challenge Set for Evaluation of Cross-Lingual Knowledge Transfer (https://arxiv.org/abs/2502.21228)
34:23 WikiGap: Promoting Epistemic Equity by Surfacing Knowledge Gaps Between English Wikipedia and other Language Editions (https://arxiv.org/abs/2505.24195)
37:24 Is It Bad to Work All the Time? Cross-Cultural Evaluation of Social Norm Biases in GPT-4 (https://arxiv.org/abs/2505.18322)
38:39 Towards Measuring the Representation of Subjective Global Opinions in Language Models (https://arxiv.org/abs/2306.16388)
48:09 I'm Afraid I Can't Do That: Predicting Prompt Refusal in Black-Box Generative Language Models (https://arxiv.org/abs/2306.03423)
50:43 From Local Concepts to Universals: Evaluating the Multicultural Understanding of Vision-Language Models (https://arxiv.org/pdf/2407.00263)
01:10:51 CulturalBench: A Robust, Diverse, and Challenging Cultural Benchmark by Human-AI CulturalTeaming (https://aclanthology.org/2025.acl-lon...)
🎧 Subscribe to stay updated on new episodes spotlighting brilliant women shaping the future of AI.
WiAIR website:
♾️ https://women-in-ai-research.github.io
Follow us at:
♾️ LinkedIn: / women-in-ai-research
♾️ Bluesky: https://bsky.app/profile/wiair.bsky.s...
♾️ X (Twitter): https://x.com/WiAIR_podcast
#AI #NLP #LLMs #CulturalBias #WomenInAI #ExplainableAI #FairnessInAI #AIResearch #EthicalAI #wiair #wiairpodcast