Towards Adversarial Robustness of Feed-forward and Recurrent Neural Networks

Author: Qinglong Wang
Release: 2020
ISBN-10: OCLC:1198401026

Book Synopsis Towards Adversarial Robustness of Feed-forward and Recurrent Neural Networks by: Qinglong Wang

Download or read book Towards Adversarial Robustness of Feed-forward and Recurrent Neural Networks written by Qinglong Wang and released in 2020. Available in PDF, EPUB and Kindle. Book excerpt: "Recent years witnessed the successful resurgence of neural networks through the lens of deep learning research. As the adoption of deep neural networks (DNNs) continues to reach multifarious branches of research, including computer vision, natural language processing, and malware detection, it has been found that the vulnerability of these powerful models is as remarkable as their capability in classification tasks. Specifically, research on the adversarial example problem exposes that DNNs, albeit powerful when confronted with legitimate samples, suffer severely from adversarial examples: synthetic examples created by slightly modifying legitimate samples. We speculate that this vulnerability may significantly impede the extensive adoption of DNNs in safety-critical domains. This thesis aims to comprehend some of the mysteries of this vulnerability of DNNs, and to design generic frameworks and deployable algorithms that protect DNNs of different architectures from attacks armed with adversarial examples. We first conduct a thorough exploration of existing research on explaining the pervasiveness of adversarial examples. We unify the hypotheses raised in existing work by extracting three major influencing factors, i.e., data, model, and training. These factors also help locate different attack and defense methods proposed across the research spectrum and analyze their effectiveness and limitations. We then perform two threads of research on neural networks with feed-forward and recurrent architectures, respectively. In the first thread, we focus on the adversarial robustness of feed-forward neural networks, which have been widely applied to process images.
Under our proposed generic framework, we design two types of adversary-resistant feed-forward networks that weaken the destructive power of adversarial examples and even prevent their creation. We theoretically validate the effectiveness of our methods and empirically demonstrate that they significantly boost a DNN's adversarial robustness while maintaining high classification accuracy. Our second thread of study focuses on the adversarial robustness of the recurrent neural network (RNN), which represents a variety of networks typically used to process sequential data. We develop an evaluation framework and propose to quantitatively evaluate an RNN's adversarial robustness with deterministic finite automata (DFA), which represent rigorous rules and can be extracted from RNNs, together with a distance metric suitable for strings. We demonstrate the feasibility of using extracted DFA as rules through careful experimental studies that identify key conditions affecting extraction performance. Moreover, we theoretically establish the correspondence between different RNNs and different DFA, and empirically validate this correspondence by evaluating and comparing the extraction performance of different RNNs. Finally, we develop an algorithm under our framework and conduct a case study to evaluate the adversarial robustness of different RNNs on a set of regular grammars."--
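The abstract notes that adversarial examples are created by slightly modifying legitimate samples. A minimal sketch of one common construction, the fast gradient sign method (FGSM), illustrates the idea; the linear logistic "model", its weights, and the perturbation budget `epsilon` below are illustrative assumptions, not the thesis's own attack or defense:

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def fgsm_perturb(x, y, w, b, epsilon=0.1):
    """Nudge each coordinate of x by epsilon in the direction that
    increases the cross-entropy loss of a logistic model (w, b)."""
    p = sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b)
    sign = lambda g: (g > 0) - (g < 0)
    # gradient of the loss w.r.t. input coordinate i is (p - y) * w[i]
    return [xi + epsilon * sign((p - y) * wi) for wi, xi in zip(w, x)]

w, b = [0.5, -1.2, 0.8, 0.3], 0.0        # hypothetical trained model
x, y = [1.0, 0.2, -0.5, 0.7], 1.0        # a "legitimate" sample, label 1
x_adv = fgsm_perturb(x, y, w, b)
print([round(xa - xi, 3) for xa, xi in zip(x_adv, x)])
# → [-0.1, 0.1, -0.1, -0.1]  (each coordinate moved by at most epsilon)
```

Because every coordinate changes by at most `epsilon`, the adversarial input stays close to the original sample while the model's loss on it increases, which is exactly the "slight modification" the abstract refers to.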
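The second thread evaluates RNN robustness via extracted DFA and a string distance metric. A small sketch of the two ingredients, assuming a toy regular grammar (binary strings with an even number of 1s) and Levenshtein distance as the string metric; the thesis's actual extraction algorithm and metric may differ:

```python
def dfa_accepts(transitions, start, accepting, s):
    """Run string s through a DFA given as {(state, symbol): state}."""
    state = start
    for ch in s:
        state = transitions[(state, ch)]
    return state in accepting

def edit_distance(a, b):
    """Levenshtein distance via a single-row dynamic program."""
    dp = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        prev, dp[0] = dp[0], i
        for j, cb in enumerate(b, 1):
            prev, dp[j] = dp[j], min(dp[j] + 1,        # delete from a
                                     dp[j - 1] + 1,    # insert into a
                                     prev + (ca != cb))  # substitute
    return dp[-1]

# Toy DFA: accept binary strings containing an even number of 1s.
T = {('e', '0'): 'e', ('e', '1'): 'o',
     ('o', '0'): 'o', ('o', '1'): 'e'}
print(dfa_accepts(T, 'e', {'e'}, "1010"))  # True  (two 1s)
print(edit_distance("1010", "1001"))       # 2
```

Under this kind of setup, an extracted DFA supplies a rigorous accept/reject rule for each string, and the edit distance between a string and its perturbed variant quantifies how small a change flips the RNN's behavior.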


Towards Adversarial Robustness of Feed-forward and Recurrent Neural Networks Related Books

Strengthening Deep Neural Networks
Language: en
Pages: 246
Authors: Katy Warr
Categories: Computers
Type: BOOK - Published: 2019-07-03 - Publisher: "O'Reilly Media, Inc."


As deep neural networks (DNNs) become increasingly common in real-world applications, the potential to deliberately "fool" them with data that wouldn’t trick…
On the Robustness of Neural Network: Attacks and Defenses
Language: en
Pages: 158
Authors: Minhao Cheng
Categories:
Type: BOOK - Published: 2021 - Publisher:


Neural networks provide state-of-the-art results for most machine learning tasks. Unfortunately, neural networks are vulnerable to adversarial examples. That is…
Towards Adversarial Robustness of Deep Neural Networks
Language: en
Pages: 0
Authors: Puyudi Yang
Categories:
Type: BOOK - Published: 2020 - Publisher:


Robustness to adversarial perturbation has become an extremely important criterion for applications of deep neural networks in many security-sensitive domains…
Adversarial Robustness of Deep Learning Models
Language: en
Pages: 80
Authors: Samarth Gupta (S.M.)
Categories:
Type: BOOK - Published: 2020 - Publisher:


Efficient operation and control of modern-day urban systems such as transportation networks is now more important than ever due to huge societal benefits…