Fooling Computer Vision Classifiers with Adversarial Examples
Navy SBIR 2018.2 - Topic N182-127 ONR - Ms. Lore-Anne Ponirakis - [email protected] Opens: May 22, 2018 - Closes: June 20, 2018 (8:00 PM ET)
TECHNOLOGY AREA(S): Information Systems, Sensors

ACQUISITION PROGRAM: Various NAVAIR drone programs that are committed to airborne computer vision (STUAS, BAMS); Locust INP

The technology within this
topic is restricted under the International Traffic in Arms Regulation (ITAR),
22 CFR Parts 120-130, which controls the export and import of defense-related
material and services, including export of sensitive technical data, or the
Export Administration Regulation (EAR), 15 CFR Parts 730-774, which controls
dual use items. Offerors must disclose any proposed use of foreign nationals
(FNs), their country(ies) of origin, the type of visa or work permit possessed,
and the statement of work (SOW) tasks intended for accomplishment by the FN(s)
in accordance with section 3.5 of the Announcement. Offerors are advised
foreign nationals proposed to perform on this topic may be restricted due to
the technical data under US Export Control Laws.

OBJECTIVE: Identify effective
techniques to change the appearance of objects and thereby cause computer
vision classifiers to misclassify those objects. These techniques will modify
the appearance of the physical objects themselves rather than images of those
objects. A set of techniques will be identified and tested against multiple
state-of-the-art classifiers using images from multiple viewpoints around the
target objects. A minimal sketch of such a multi-viewpoint, multi-classifier test is given below.
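
The following sketch illustrates, in rough form, how images from several viewpoints around a target could be run against multiple classifiers. It is illustrative only: the image file names, the 45-degree viewpoint spacing, and the choice of off-the-shelf ImageNet models from torchvision are assumptions made for the example, not requirements of the topic.

```python
# Illustrative multi-viewpoint, multi-classifier test harness.
# File names, viewpoint spacing, and model choices are placeholder assumptions.
import torch
from torchvision import transforms
from torchvision.models import (resnet50, ResNet50_Weights,
                                mobilenet_v3_large, MobileNet_V3_Large_Weights)
from PIL import Image

preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

# Off-the-shelf ImageNet models standing in for "state-of-the-art classifiers".
classifiers = {
    "resnet50": resnet50(weights=ResNet50_Weights.IMAGENET1K_V2).eval(),
    "mobilenet_v3": mobilenet_v3_large(weights=MobileNet_V3_Large_Weights.IMAGENET1K_V2).eval(),
}

# Placeholder photographs of one target object taken every 45 degrees around it.
viewpoints = [f"target_azimuth_{deg:03d}.jpg" for deg in range(0, 360, 45)]

for name, model in classifiers.items():
    for path in viewpoints:
        x = preprocess(Image.open(path).convert("RGB")).unsqueeze(0)
        with torch.no_grad():
            probs = torch.softmax(model(x), dim=1)
        conf, label = probs.max(dim=1)
        print(f"{name:12s} {path}: class {label.item()} ({conf.item():.2%})")
```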

DESCRIPTION: The goal of this
SBIR topic is to better understand the mechanisms by which one can trick
computer vision classifiers in order to anticipate and counter enemy efforts at
camouflaging and misleading our classifiers. For example, a unit may want to confuse
enemy surveillance into thinking that a Honda Civic is a tank, or vice
versa. Deep neural networks are achieving near-certain recognition of familiar objects
that might otherwise be unrecognizable to human eyes.
Nevertheless, there are some interesting differences between the ways human
vision classifies entities and the ways that classifiers do. Certain
perturbations to images can cause otherwise highly effective
classifiers to lose robustness on large-scale datasets. Similar changes to the
objects themselves may also degrade classifier performance and act as "computer
vision camouflage." A minimal image-space sketch of such a perturbation is shown below.
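
As one concrete illustration of such image perturbations, the fast gradient sign method of Goodfellow et al. (reference 2) can shift a classifier's decision with a single gradient step. The sketch below is illustrative only: the input photograph, the epsilon budget, and the use of a torchvision ResNet-50 are assumptions, and the perturbation is applied to the image rather than to a physical object, which is precisely the gap this topic asks offerors to close.

```python
# Minimal fast gradient sign method (FGSM) sketch, after Goodfellow et al. (reference 2).
# The input image path and epsilon are illustrative assumptions, and the attack
# perturbs the image itself, not a physical object.
import torch
import torch.nn.functional as F
from torchvision import transforms
from torchvision.models import resnet50, ResNet50_Weights
from PIL import Image

model = resnet50(weights=ResNet50_Weights.IMAGENET1K_V2).eval()

to_tensor = transforms.Compose([transforms.Resize(256),
                                transforms.CenterCrop(224),
                                transforms.ToTensor()])
normalize = transforms.Normalize(mean=[0.485, 0.456, 0.406],
                                 std=[0.229, 0.224, 0.225])

# Placeholder photograph of the target object.
x = to_tensor(Image.open("honda_civic.jpg").convert("RGB")).unsqueeze(0)
x.requires_grad_(True)

# Original (clean) prediction.
logits = model(normalize(x))
label = logits.argmax(dim=1)

# One gradient step in the direction that increases the loss for the clean label.
loss = F.cross_entropy(logits, label)
loss.backward()
epsilon = 8 / 255  # assumed perturbation budget in pixel units
x_adv = (x + epsilon * x.grad.sign()).clamp(0, 1).detach()

# Perturbed prediction: often a different, confidently held class.
adv_logits = model(normalize(x_adv))
print("clean:", label.item(), "adversarial:", adv_logits.argmax(dim=1).item())
```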

PHASE I: Determine feasibility for the development of operationally relevant techniques for
fooling computer vision classifiers. Conduct a detailed analysis of literature
and commercial capabilities. Assess which known techniques for image
modification could be adapted to the physical modification of objects.
Identify the target objects and computer vision classifiers to be used for
testing the effectiveness of camouflage techniques. Produce Phase II plans with
a technology roadmap, development milestones, and projected Phase II achievable
performance.

PHASE II: Develop physical
modifications for fooling computer vision classifiers and evaluate their
effectiveness against the classifiers identified in Phase I. Attempt to
identify root causes for the misclassifications. Propose and test improvements
to the classifier algorithms to counter the physical modifications. Phase II
deliverables will include five different physical modification kits for
aircraft and vehicles for applying computer vision deception techniques, test
results for the effectiveness of both physical modifications and algorithmic
improvements, relevant source code for any algorithmic improvements, and a
demonstration using a scenario of interest. The demonstration scenario would
include analyzing a target object both before and after modification, and show
that the classifier's output reliably shifts from one decision to another. The
computer vision classifier to be used will be determined by the offeror. A minimal sketch of such a before-and-after comparison is given below.
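
The before-and-after demonstration could be scripted along the following lines. This is a sketch under assumptions: the "before" and "after" photo directories are placeholders, and an off-the-shelf torchvision ResNet-50 stands in for whatever classifier the offeror ultimately selects.

```python
# Hypothetical before/after comparison for the Phase II demonstration. The
# directories of "before" and "after" photographs are placeholders; the
# classifier is an off-the-shelf stand-in for the offeror's chosen model.
from pathlib import Path
import torch
from torchvision.models import resnet50, ResNet50_Weights
from PIL import Image

weights = ResNet50_Weights.IMAGENET1K_V2
model = resnet50(weights=weights).eval()
preprocess = weights.transforms()          # preprocessing published with the weights
categories = weights.meta["categories"]    # ImageNet class names

def classify(directory: str) -> None:
    """Print the top-1 class and confidence for every image in a directory."""
    for path in sorted(Path(directory).glob("*.jpg")):
        x = preprocess(Image.open(path).convert("RGB")).unsqueeze(0)
        with torch.no_grad():
            probs = torch.softmax(model(x), dim=1)
        conf, idx = probs.max(dim=1)
        print(f"{path.name}: {categories[idx.item()]} ({conf.item():.2%})")

# A successful modification kit should shift the top-1 decision between runs.
classify("target_before_modification/")   # e.g., consistently one label
classify("target_after_modification/")    # e.g., shifted to a different label
```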

PHASE III DUAL USE APPLICATIONS: Transition physical modification kits to operational units and
vehicles. The proposer should provide a means for performance evaluation with
metrics for analysis (e.g., effectiveness of physical modifications) and a method
for operator assessment of product interactions (e.g., ease of use).
Collaborate with private sector providers of computer vision products.
Developed technology would be relevant to the mixed reality gaming market as it
would allow environments seen through special glasses to be varied easily from
game to game.

REFERENCES:

1. Kurakin, Alexey, Goodfellow, Ian J., and Bengio, Samy. "Adversarial examples in the physical world." Proceedings of the International Conference on Learning Representations (ICLR), 2017. https://arxiv.org/pdf/1607.02533.pdf

2. Goodfellow, Ian J., Shlens, Jonathon, and Szegedy, Christian. "Explaining and harnessing adversarial examples." Proceedings of the International Conference on Learning Representations (ICLR), 2015. https://arxiv.org/pdf/1412.6572.pdf

3. Carlini, Nicholas and Wagner, David. "Towards evaluating the robustness of neural networks." IEEE Symposium on Security & Privacy, 2017. https://arxiv.org/pdf/1608.04644.pdf

4. Nguyen, A., Yosinski, J., and Clune, J. "Deep Neural Networks are Easily Fooled: High Confidence Predictions for Unrecognizable Images." Computer Vision and Pattern Recognition (CVPR '15), IEEE, 2015. https://arxiv.org/pdf/1412.1897.pdf

5. Luo, Yan, Boix, Xavier, Roig, Gemma, Poggio, Tomaso, and Zhao, Qi. "Foveation-based mechanisms alleviate adversarial examples." 2016. https://arxiv.org/pdf/1511.06292.pdf

KEYWORDS: Computer Vision; Algorithm Warfare; Artificial Intelligence; Deception; Deep Learning; Decoys