The Broken Shield - ICLR 2021

Please see our full paper here:

In this article we survey adversarial attack tools for face detection and recognition models (FDRMs) and analyze a case study of the risks these tools and models pose. We develop a framework for the user-centered design of adversarial tools and draw lessons on risk disclosure from U.S. product liability, consumer protection, and negligence law. Combining these, we offer a set of concrete recommendations to help makers of FDRM adversarial tools reduce user harm through better communication about risk.