Probably Private
Probably Private is a channel for privacy, data science, machine learning and AI enthusiasts. Whether you are here to learn about data science/ML/AI through the lens of privacy, or the other way around, this channel aims to be an open conversation on the technical and social aspects of privacy and its intersections with surveillance, law, technology, mathematics and probability.
The Probably Private Newsletter (see link!) covers similar topics in written form, so subscribe there to get regular emails and explore these ideas in more depth.
Katharine Jarmul is an internationally recognized author, lecturer and privacy activist/researcher who has been working in machine learning and large-scale data analysis and processing for the past 15 years. Her most recent book, Practical Data Privacy, has been translated into 3 languages and is available from your favorite book resellers.
Attacking AI Models: Context Window Stuffing
AI Security Mini-Course: Prompt Sanitization with Microsoft Presidio
Differential Privacy in Deep Learning and AI
Extracting Training Data from Diffusion Models
5 Steps to Build an AI Security Strategy at your organization
AI Red Teaming: Exploring Embeddings and Training Data as an Adversary
Career Advice: Find a great boss!
AI Red Teaming Mini-Course: Building Adversarial Examples
Addressing Memorization in AI/ML: Attacks on Machine Unlearning
Red Teaming AI Systems: Advanced Prompt Engineering with Multimodal Inputs
Five Things I Wish I Had Known Getting Started in AI / ML Privacy and Security
Red Teaming Mini Course: Attacking LLMs with other LLMs (Crescendo Attack)
Jailbreak AI models with Prompt Engineering
Learn Hands-On AI Security & Red Teaming
How does machine unlearning work?
What is Machine Unlearning?
What is Machine Forgetting?
Let's talk about Information Theory! (Privacy and Machine Learning/AI)
Getting started with Local AI tools
What is Privacy Engineering? With Tariq Yusuf
What are AI guardrails? How do they work?
Common AI privacy mistakes and what to do instead
Attacking AI and deep learning models to extract sensitive data (Privacy Attacks on AI/ML systems)
Introduction to Priveedly: Running your own personalized content engine
Training your own personalized and private content recommender using scikit-learn and Priveedly
What Adversarial Machine Learning Teaches us about AI Memorization
How could we have known about AI memorization? Exploring differential privacy in deep learning.
How AI/ML memorization happens: Overparameterized models
AI Memorization: How and why novel examples are memorized in deep learning systems
How AI / ML Memorization Happens: Repetition