Dataset Poisoning on the Industrial Scale
Author: Google TechTalks
Uploaded: 2021-06-04
Views: 6457
A Google TechTalk, 2020/7/29, presented by Tom Goldstein, University of Maryland
ABSTRACT: Dataset poisoning is a security vulnerability in which a bad actor modifies the training data for a machine learning system in a way that allows them to control its test-time behavior. In this talk, I discuss our recent work on "clean-label" data poisoning methods, in which poison images appear normal to a human and are labeled correctly. I present several ways to create such poisoning attacks, and show that they can be made effective against black-box industrial systems, including Google AutoML.
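For context, the sketch below illustrates one well-known clean-label poisoning strategy, feature collision: the attacker optimizes a poison image that stays visually close to a correctly labeled base image while matching a chosen target image in a model's feature space. This is a minimal illustration of the general idea, not necessarily the exact method presented in the talk; the feature extractor `feature_net` and the tensors `base` and `target` are assumed inputs.

```python
# Minimal sketch of a clean-label "feature collision" poisoning step.
# Assumes a fixed, differentiable feature extractor `feature_net`, a correctly
# labeled base image `base`, and a target test image `target` that the
# attacker wants misclassified. Names and hyperparameters are illustrative.
import torch

def craft_poison(feature_net, base, target, steps=200, lr=0.01, beta=0.1):
    """Optimize a poison image that looks like `base` in pixel space but
    collides with `target` in feature space."""
    feature_net.eval()
    with torch.no_grad():
        target_feat = feature_net(target)          # fixed feature of the target
    poison = base.clone().requires_grad_(True)
    opt = torch.optim.Adam([poison], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        feat_loss = (feature_net(poison) - target_feat).pow(2).sum()
        pixel_loss = (poison - base).pow(2).sum()  # keep the poison visually close to base
        loss = feat_loss + beta * pixel_loss
        loss.backward()
        opt.step()
        poison.data.clamp_(0.0, 1.0)               # keep pixel values in a valid range
    return poison.detach()
```

Because the poison retains the base image's appearance and its correct label, it can pass human inspection while still steering the trained model's behavior on the target.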