Merge pull request #451 from anguyen8:master
commit ac1e608b82
14 changed files with 494 additions and 0 deletions
@@ -0,0 +1,29 @@
# Compiled Object files
*.slo
*.lo
*.o
*.obj

# Precompiled Headers
*.gch
*.pch

# Compiled Dynamic libraries
*.so
*.dylib
*.dll
*.pyc

# Fortran module files
*.mod

# Compiled Static libraries
*.lai
*.la
*.a
*.lib

# Executables
*.exe
*.out
*.app
Binary file not shown.
@@ -0,0 +1,59 @@
# Fooling Code
This is the code base used to reproduce the "fooling" images in the paper:

[Nguyen A](http://anhnguyen.me), [Yosinski J](http://yosinski.com/), [Clune J](http://jeffclune.com). ["Deep Neural Networks are Easily Fooled: High Confidence Predictions for Unrecognizable Images"](http://arxiv.org/abs/1412.1897). In Computer Vision and Pattern Recognition (CVPR '15), IEEE, 2015.

**If you use this software in an academic article, please cite:**

    @inproceedings{nguyen2015deep,
      title={Deep Neural Networks are Easily Fooled: High Confidence Predictions for Unrecognizable Images},
      author={Nguyen, Anh and Yosinski, Jason and Clune, Jeff},
      booktitle={Computer Vision and Pattern Recognition (CVPR), 2015 IEEE Conference on},
      year={2015},
      organization={IEEE}
    }

For more information regarding the paper, please visit www.evolvingai.org/fooling
## Requirements
Installation requires two main software packages (download scripts for both are included in this package); a minimal download sketch follows this list:

1. Caffe: http://caffe.berkeleyvision.org
   * Libraries we installed to work with Caffe:
     * Cuda 6.0
     * Boost 1.52
     * g++ 4.6
   * Use the provided scripts to download the correct version of Caffe for your experiments.
     * `./download_caffe_evolutionary_algorithm.sh` downloads the Caffe version for the EA experiments
     * `./download_caffe_gradient_ascent.sh` downloads the Caffe version for the gradient ascent experiments
2. Sferes: https://github.com/jbmouret/sferes2
   * Libraries we installed to work with Sferes:
     * OpenCV 2.4.10
     * Boost 1.52
     * g++ 4.9 (a C++ compiler compatible with the C++11 standard)
   * Use the provided script `./download_sferes.sh` to download the correct version of Sferes.

Note: these are patched versions of the two frameworks that include our additional code needed to produce the images in the paper; they are not the same as their master branches.
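As a quick sketch of the download step (run from the directory containing the scripts; this assumes you want the EA experiments, so substitute `./download_caffe_gradient_ascent.sh` if you are after the gradient ascent experiments):

```bash
# Sketch only: fetch the patched frameworks into ./caffe and ./sferes.
./download_caffe_evolutionary_algorithm.sh   # patched Caffe for the EA experiments
./download_sferes.sh                         # patched Sferes framework
```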
## Installation
Please see the [Installation_Guide](https://github.com/anguyen8/opencv_contrib/blob/master/modules/dnns_easily_fooled/Installation_Guide.pdf) for more details.

## Usage
* An MNIST experiment (Fig. 4, 5 in the paper) can be run directly on a local (4-core) machine in a reasonable amount of time (~5 minutes or less for 200 generations).
* An ImageNet experiment needs to be run in a cluster environment. It took us ~4 days x 128 cores to run 5000 generations and produce 1000 images (Fig. 8 in the paper).
* [How to configure an experiment to test the evolutionary framework quickly](https://github.com/Evolving-AI-Lab/fooling/wiki/How-to-test-the-evolutionary-framework-quickly)
* To reproduce the gradient ascent fooling images (Figures 13, S3, S4, S5, S6, and S7 from the paper), see the [documentation in the caffe/ascent directory](https://github.com/anguyen8/opencv_contrib/tree/master/modules/dnns_easily_fooled/caffe/ascent). You'll need to download the correct Caffe version for this experiment using the `./download_caffe_gradient_ascent.sh` script (see the sketch below).
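A rough sketch of switching to the gradient ascent setup (run from the directory containing the download scripts; the `caffe/ascent` path comes from the link above):

```bash
# Sketch only: the download script aborts if ./caffe already exists, so remove any previous copy first.
rm -rf ./caffe
./download_caffe_gradient_ascent.sh   # patched Caffe for the gradient ascent experiments
cd caffe/ascent                       # then follow the documentation in this directory
```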
## Troubleshooting
1. If Sferes (Waf) can't find your CUDA and Caffe dynamic libraries
   > Add `obj.libpath` to the wscript for `exp/images` so it can find `libcudart` and `libcaffe`, or use `LD_LIBRARY_PATH` (on Linux); see the sketch below.
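A minimal sketch of the `LD_LIBRARY_PATH` route (the CUDA and Caffe locations below are assumptions; point them at wherever `libcudart` and `libcaffe` actually live on your system):

```bash
# Assumed install locations; adjust to your CUDA toolkit and Caffe build paths.
export LD_LIBRARY_PATH=/usr/local/cuda/lib64:$HOME/src/caffe/build/lib:$LD_LIBRARY_PATH
# Then re-run the Sferes (Waf) build so it can resolve the libraries.
```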
2. Is there a way to monitor the progress of the experiments?
   > There is a flag for printing out results (fitness + images) every N generations. You can adjust the `dump_period` setting [here](https://github.com/Evolving-AI-Lab/fooling/blob/master/sferes/exp/images/dl_map_elites_images.cpp#L159).

3. Where do I get the pre-trained Caffe models?
   > For AlexNet, please download it from Caffe's Model Zoo.
   > For LeNet, you can grab it [here](https://github.com/anguyen8/opencv_contrib/tree/master/modules/dnns_easily_fooled/model/lenet).

4. How do I run the experiments on my local machine without MPI?
   > You can switch between MPI and non-MPI mode by commenting/uncommenting a line [here](https://github.com/Evolving-AI-Lab/fooling/blob/master/sferes/exp/images/dl_map_elites_images_mnist.cpp#L190-L191). The evaluator can be either `eval::Eval` (single-core) or `eval::Mpi` (distributed, for clusters).
@@ -0,0 +1,19 @@
#!/bin/bash

if [ -d "./caffe" ]; then
  echo "Please remove the existing [caffe] folder and re-run this script."
  exit 1
fi

# Download the version of Caffe that can be used for generating fooling images via EAs.
echo "Downloading Caffe ..."
wget https://github.com/Evolving-AI-Lab/fooling/archive/master.zip

echo "Extracting into ./caffe"
unzip master.zip
mv ./fooling-master/caffe ./

# Clean up
rm -rf fooling-master master.zip

echo "Done."
|||||||
|
#!/bin/bash |
||||||
|
|
||||||
|
if [ -d "./caffe" ]; then |
||||||
|
echo "Please remove the existing [caffe] folder and re-run this script." |
||||||
|
exit 1 |
||||||
|
fi |
||||||
|
|
||||||
|
# Download the version of Caffe that can be used for generating fooling images via EAs. |
||||||
|
echo "Downloading Caffe ..." |
||||||
|
wget https://github.com/Evolving-AI-Lab/fooling/archive/ascent.zip |
||||||
|
|
||||||
|
echo "Extracting into ./caffe" |
||||||
|
unzip ascent.zip |
||||||
|
mv ./fooling-ascent/caffe ./ |
||||||
|
|
||||||
|
# Clean up |
||||||
|
rm -rf fooling-ascent ascent.zip |
||||||
|
|
||||||
|
echo "Done." |
@@ -0,0 +1,20 @@
#!/bin/bash
path="./sferes"

if [ -d "${path}" ]; then
  echo "Please remove the existing [${path}] folder and re-run this script."
  exit 1
fi

# Download the version of Sferes that can be used for generating fooling images via EAs.
echo "Downloading Sferes ..."
wget https://github.com/Evolving-AI-Lab/fooling/archive/master.zip

echo "Extracting into ${path}"
unzip master.zip
mv ./fooling-master/sferes ./

# Clean up
rm -rf fooling-master master.zip

echo "Done."
Binary image file added (109 KiB), not shown.
@@ -0,0 +1 @@
/home/anh/workspace/sferes/exp/images/imagenet/hen_256.png 1
@@ -0,0 +1,223 @@
name: "CaffeNet"
layers {
  name: "data"
  type: IMAGE_DATA
  top: "data"
  top: "label"
  image_data_param {
    source: "/home/anh/workspace/sferes/exp/images/imagenet/image_list.txt"
    mean_file: "/home/anh/src/caffe/data/ilsvrc12/imagenet_mean.binaryproto"
    batch_size: 10
    crop_size: 227
    mirror: false
    new_height: 256
    new_width: 256
    images_in_color: true
  }
}
layers {
  name: "conv1"
  type: CONVOLUTION
  bottom: "data"
  top: "conv1"
  convolution_param {
    num_output: 96
    kernel_size: 11
    stride: 4
  }
}
layers {
  name: "relu1"
  type: RELU
  bottom: "conv1"
  top: "conv1"
}
layers {
  name: "pool1"
  type: POOLING
  bottom: "conv1"
  top: "pool1"
  pooling_param {
    pool: MAX
    kernel_size: 3
    stride: 2
  }
}
layers {
  name: "norm1"
  type: LRN
  bottom: "pool1"
  top: "norm1"
  lrn_param {
    local_size: 5
    alpha: 0.0001
    beta: 0.75
  }
}
layers {
  name: "conv2"
  type: CONVOLUTION
  bottom: "norm1"
  top: "conv2"
  convolution_param {
    num_output: 256
    pad: 2
    kernel_size: 5
    group: 2
  }
}
layers {
  name: "relu2"
  type: RELU
  bottom: "conv2"
  top: "conv2"
}
layers {
  name: "pool2"
  type: POOLING
  bottom: "conv2"
  top: "pool2"
  pooling_param {
    pool: MAX
    kernel_size: 3
    stride: 2
  }
}
layers {
  name: "norm2"
  type: LRN
  bottom: "pool2"
  top: "norm2"
  lrn_param {
    local_size: 5
    alpha: 0.0001
    beta: 0.75
  }
}
layers {
  name: "conv3"
  type: CONVOLUTION
  bottom: "norm2"
  top: "conv3"
  convolution_param {
    num_output: 384
    pad: 1
    kernel_size: 3
  }
}
layers {
  name: "relu3"
  type: RELU
  bottom: "conv3"
  top: "conv3"
}
layers {
  name: "conv4"
  type: CONVOLUTION
  bottom: "conv3"
  top: "conv4"
  convolution_param {
    num_output: 384
    pad: 1
    kernel_size: 3
    group: 2
  }
}
layers {
  name: "relu4"
  type: RELU
  bottom: "conv4"
  top: "conv4"
}
layers {
  name: "conv5"
  type: CONVOLUTION
  bottom: "conv4"
  top: "conv5"
  convolution_param {
    num_output: 256
    pad: 1
    kernel_size: 3
    group: 2
  }
}
layers {
  name: "relu5"
  type: RELU
  bottom: "conv5"
  top: "conv5"
}
layers {
  name: "pool5"
  type: POOLING
  bottom: "conv5"
  top: "pool5"
  pooling_param {
    pool: MAX
    kernel_size: 3
    stride: 2
  }
}
layers {
  name: "fc6"
  type: INNER_PRODUCT
  bottom: "pool5"
  top: "fc6"
  inner_product_param {
    num_output: 4096
  }
}
layers {
  name: "relu6"
  type: RELU
  bottom: "fc6"
  top: "fc6"
}
layers {
  name: "drop6"
  type: DROPOUT
  bottom: "fc6"
  top: "fc6"
  dropout_param {
    dropout_ratio: 0.5
  }
}
layers {
  name: "fc7"
  type: INNER_PRODUCT
  bottom: "fc6"
  top: "fc7"
  inner_product_param {
    num_output: 4096
  }
}
layers {
  name: "relu7"
  type: RELU
  bottom: "fc7"
  top: "fc7"
}
layers {
  name: "drop7"
  type: DROPOUT
  bottom: "fc7"
  top: "fc7"
  dropout_param {
    dropout_ratio: 0.5
  }
}
layers {
  name: "fc8"
  type: INNER_PRODUCT
  bottom: "fc7"
  top: "fc8"
  inner_product_param {
    num_output: 1000
  }
}
layers {
  name: "prob"
  type: SOFTMAX
  bottom: "fc8"
  top: "prob"
}
@@ -0,0 +1,123 @@
name: "LeNet"
layers {
  name: "data"
  type: IMAGE_DATA
  top: "data"
  top: "label"
  image_data_param {
    source: "/project/EvolvingAI/anguyen8/model/mnist_image_list.txt"
    mean_file: "/project/EvolvingAI/anguyen8/model/mnist_mean.binaryproto"
    batch_size: 1
    mirror: false
    new_height: 28
    new_width: 28
    scale: 0.00390625
    images_in_color: false
  }
}
layers {
  name: "conv1"
  type: CONVOLUTION
  bottom: "data"
  top: "conv1"
  blobs_lr: 1
  blobs_lr: 2
  convolution_param {
    num_output: 20
    kernel_size: 5
    stride: 1
    weight_filler {
      type: "xavier"
    }
    bias_filler {
      type: "constant"
    }
  }
}
layers {
  name: "pool1"
  type: POOLING
  bottom: "conv1"
  top: "pool1"
  pooling_param {
    pool: MAX
    kernel_size: 2
    stride: 2
  }
}
layers {
  name: "conv2"
  type: CONVOLUTION
  bottom: "pool1"
  top: "conv2"
  blobs_lr: 1
  blobs_lr: 2
  convolution_param {
    num_output: 50
    kernel_size: 5
    stride: 1
    weight_filler {
      type: "xavier"
    }
    bias_filler {
      type: "constant"
    }
  }
}
layers {
  name: "pool2"
  type: POOLING
  bottom: "conv2"
  top: "pool2"
  pooling_param {
    pool: MAX
    kernel_size: 2
    stride: 2
  }
}
layers {
  name: "ip1"
  type: INNER_PRODUCT
  bottom: "pool2"
  top: "ip1"
  blobs_lr: 1
  blobs_lr: 2
  inner_product_param {
    num_output: 500
    weight_filler {
      type: "xavier"
    }
    bias_filler {
      type: "constant"
    }
  }
}
layers {
  name: "relu1"
  type: RELU
  bottom: "ip1"
  top: "ip1"
}
layers {
  name: "ip2"
  type: INNER_PRODUCT
  bottom: "ip1"
  top: "ip2"
  blobs_lr: 1
  blobs_lr: 2
  inner_product_param {
    num_output: 10
    weight_filler {
      type: "xavier"
    }
    bias_filler {
      type: "constant"
    }
  }
}
layers {
  name: "prob"
  type: SOFTMAX
  bottom: "ip2"
  top: "prob"
}
Binary file not shown.
@@ -0,0 +1 @@
/project/EvolvingAI/anguyen8/model/mnist_sample_image.png 0
Binary file not shown.
Binary image file added (677 B), not shown.