Center for Nonlinear Studies
Wednesday, April 03, 2019
11:00 AM - 12:00 PM
CNLS Conference Room (TA-3, Bldg 1690)

Seminar

The Adversarial Landscape: Satellite Imagery Forgery Detection and Counter-Forensic CNNs

David Güera Cobo
Purdue University

In the first part of the talk, we address the problem of spotting manipulations in satellite imagery. Current satellite imaging technology enables capturing high-resolution pictures of the ground. Like any other digital images, overhead pictures can be easily forged. In this work, we propose several methods for satellite image forgery detection and localization. Specifically, we consider the scenario in which pixels within a region of a satellite image are replaced to add or remove an object from the scene. Our approaches work under the assumption that no forged images are available for training. Using a generative adversarial network (GAN), we learn a feature representation of pristine satellite images; a one-class support vector machine (SVM) is then trained on these features to model their distribution, and image forgeries are detected as anomalies. We also present initial results with Deep Support Vector Data Description (Deep SVDD), a recently introduced method trained directly on an anomaly detection objective. The proposed methods are validated against different kinds of satellite images containing forgeries of different sizes and shapes. Finally, we describe the use of a conditional generative adversarial network (cGAN) to identify spliced forgeries within satellite images when prior knowledge of the forgeries is available.
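The training-free detection pipeline described above can be sketched as follows. This is a minimal illustration, not the speaker's implementation: random 16-dimensional vectors stand in for the GAN-learned features of pristine patches, scikit-learn's OneClassSVM plays the role of the one-class model, and a forged patch is assumed to map far from the pristine feature distribution.

```python
import numpy as np
from sklearn.svm import OneClassSVM

rng = np.random.default_rng(0)

# Stand-in for GAN-learned features of pristine satellite patches.
# In the talk's pipeline these would come from a GAN trained on
# pristine imagery; here they are synthetic 16-D vectors.
pristine_features = rng.normal(loc=0.0, scale=1.0, size=(500, 16))

# Train a one-class SVM on pristine features only: no forged
# examples are needed at training time.
detector = OneClassSVM(kernel="rbf", gamma="scale", nu=0.05)
detector.fit(pristine_features)

# A forged patch is assumed to map far from the pristine distribution.
forged_feature = rng.normal(loc=6.0, scale=1.0, size=(1, 16))

is_forged = detector.predict(forged_feature)[0] == -1  # -1 marks an outlier
inlier_rate = (detector.predict(pristine_features) == 1).mean()
print(is_forged, round(inlier_rate, 2))
```

The key design point is that the detector never sees a forgery during training; anything falling outside the learned pristine distribution is flagged as anomalous.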

In the second part of the talk, we examine the vulnerability of current convolutional neural networks (CNNs) to adversarial attacks. Because the class boundaries learned by CNNs lack robustness and images and other data types are high-dimensional, neural networks are vulnerable to adversarial examples. In this work, we briefly survey the current state of the art in adversarial attacks and defenses. We also present a concrete example: the problem of identifying the device model or type used to take an image, which we show can be spoofed. We describe a counter-forensic method capable of subtly altering images so that any CNN-based device model detector reports a different estimated device model. Our results show that even advanced deep learning architectures trained to extract device model information from images remain vulnerable to the proposed method.
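The signed-gradient idea behind such counter-forensic perturbations can be illustrated with a toy stand-in. This is an assumption-laden sketch, not the speaker's method: a logistic-regression "device-model classifier" on synthetic features replaces the CNN detector, and a single FGSM-style step flips its prediction.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy "device-model classifier": logistic regression separating
# synthetic feature vectors from two hypothetical camera models.
d = 8
feats_a = rng.normal(-1.0, 0.5, size=(200, d))  # camera model A
feats_b = rng.normal(+1.0, 0.5, size=(200, d))  # camera model B
X = np.vstack([feats_a, feats_b])
y = np.array([0] * 200 + [1] * 200)

# Fit by plain gradient descent on the logistic loss.
w, b = np.zeros(d), 0.0
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))
    w -= 0.5 * (X.T @ (p - y)) / len(y)
    b -= 0.5 * (p - y).mean()

def predicted_model(x):
    """Return 1 (camera model B) or 0 (camera model A)."""
    return int(x @ w + b > 0)

# FGSM-style counter-forensic step: for a correctly classified
# model-B sample (label 1), the loss gradient d(loss)/dx =
# (sigmoid(z) - 1) * w points opposite to w, so stepping along
# sign(grad) means stepping along -sign(w).
x = feats_b[0]
eps = 2.0                       # deliberately large step for this toy
x_adv = x - eps * np.sign(w)

print(predicted_model(x), predicted_model(x_adv))
```

The perturbation here is large for clarity; the counter-forensic setting described in the talk requires alterations subtle enough to leave the image visually unchanged.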

NOTE: Future speaker nominations through the Information Science and Technology Institute (ISTI) are welcome and can be entered at: https://isti-seminar.lanl.gov/app/calendar

Hosted by the Information Science and Technology Institute (ISTI)

Host: Juston Moore