Cats and Dogs 2
In the previous lesson you saw how to use a CNN to make your recognition of handwritten digits more efficient. In this lesson you'll take that to the next level, recognizing real images of cats and dogs in order to classify an incoming image as one or the other. In particular, the handwriting recognition made your life a little easier because all the images were the same size and shape, and they were all monochrome. Real-world images aren't like that -- they come in different shapes, aspect ratios, etc., and they're usually in color!
So, as part of the task, you'll need to preprocess your data -- not least resizing it to be uniform in shape.
You'll follow these steps:
1. Explore the example data of cats and dogs
2. Build and train a neural network to classify between the two
3. Evaluate the training and validation accuracy
Let's start by downloading our example data, a .zip of 2,000 JPG pictures of cats and dogs,
and extracting it locally in /tmp.
NOTE: The 2,000 images used in this exercise are excerpted from the "Dogs vs. Cats"
dataset available on Kaggle, which contains 25,000 images. Here, we use a subset of the full
dataset to decrease training time for educational purposes.
The following Python code uses the os library to give you access to the file system, and the zipfile library to unzip the data.
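A minimal sketch of that download-and-extract step (the URL is an assumption about where the archive is hosted; substitute your own source if it differs):

```python
import os
import urllib.request
import zipfile

# Assumed download location for the filtered dataset archive.
DATA_URL = "https://storage.googleapis.com/mledu-datasets/cats_and_dogs_filtered.zip"
local_zip = "/tmp/cats_and_dogs_filtered.zip"

# Download the .zip and extract its contents into /tmp.
urllib.request.urlretrieve(DATA_URL, local_zip)
with zipfile.ZipFile(local_zip, "r") as zip_ref:
    zip_ref.extractall("/tmp")
```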
The contents of the .zip are extracted to the base directory /tmp/cats_and_dogs_filtered,
which contains train and validation subdirectories for the training and validation datasets (see
the Machine Learning Crash Course for a refresher on training, validation, and test sets),
which in turn each contain cats and dogs subdirectories.
In short: the training set is the data that is used to tell the neural network model 'this is what a cat looks like', 'this is what a dog looks like', etc. The validation set consists of images of cats and dogs that the neural network will not see as part of the training, so you can test how well or how badly it does at evaluating whether an image contains a cat or a dog.
One thing to pay attention to in this sample: we do not explicitly label the images as cats or dogs. If you remember, with the handwriting example earlier we labelled 'this is a 1', 'this is a 7', etc. Later you'll see a class called ImageDataGenerator being used -- it is coded to read images from subdirectories, and automatically label them from the name of that subdirectory. So, for example, you will have a 'training' directory containing a 'cats' directory and a 'dogs' one. ImageDataGenerator will label the images appropriately for you, saving you a coding step.
Let's define each of these directories:
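For example, assuming the archive was extracted to /tmp as above:

```python
import os

base_dir = '/tmp/cats_and_dogs_filtered'
train_dir = os.path.join(base_dir, 'train')
validation_dir = os.path.join(base_dir, 'validation')

# Directories with our training cat and dog pictures
train_cats_dir = os.path.join(train_dir, 'cats')
train_dogs_dir = os.path.join(train_dir, 'dogs')

# Directories with our validation cat and dog pictures
validation_cats_dir = os.path.join(validation_dir, 'cats')
validation_dogs_dir = os.path.join(validation_dir, 'dogs')
```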
Now, let's see what the filenames look like in the cats and dogs train directories (file naming
conventions are the same in the validation directory):
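One way to do that, using the directory variables defined above:

```python
train_cat_fnames = os.listdir(train_cats_dir)
train_dog_fnames = os.listdir(train_dogs_dir)

# Print the first few filenames from each class.
print(train_cat_fnames[:10])
print(train_dog_fnames[:10])
```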
Let's find out the total number of cat and dog images in the train and validation directories:
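A quick way to count them, again using the directories defined above:

```python
print('total training cat images:', len(os.listdir(train_cats_dir)))
print('total training dog images:', len(os.listdir(train_dogs_dir)))
print('total validation cat images:', len(os.listdir(validation_cats_dir)))
print('total validation dog images:', len(os.listdir(validation_dogs_dir)))
```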
For both cats and dogs, we have 1,000 training images and 500 validation images.
Now let's take a look at a few pictures to get a better sense of what the cat and dog datasets look like. First, configure the matplotlib parameters:
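A minimal sketch of that setup (the 4x4 grid layout is an assumption sized for the 16 images shown next):

```python
import matplotlib.pyplot as plt
import matplotlib.image as mpimg

# Parameters for our graph; we'll output images in a 4x4 configuration.
nrows = 4
ncols = 4

# Index for iterating over images.
pic_index = 0
```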
Now, display a batch of 8 cat and 8 dog pictures. You can rerun the cell to see a fresh batch
each time:
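A sketch of that display cell, assuming the filename lists and matplotlib setup from the previous cells; each rerun advances pic_index, so a new batch of images appears:

```python
# Set up the matplotlib figure and size it to fit 4x4 pictures.
fig = plt.gcf()
fig.set_size_inches(ncols * 4, nrows * 4)

pic_index += 8
next_cat_pix = [os.path.join(train_cats_dir, fname)
                for fname in train_cat_fnames[pic_index - 8:pic_index]]
next_dog_pix = [os.path.join(train_dogs_dir, fname)
                for fname in train_dog_fnames[pic_index - 8:pic_index]]

for i, img_path in enumerate(next_cat_pix + next_dog_pix):
    # Set up a subplot; subplot indices start at 1.
    sp = plt.subplot(nrows, ncols, i + 1)
    sp.axis('Off')  # Don't show axes or gridlines.

    img = mpimg.imread(img_path)
    plt.imshow(img)

plt.show()
```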
It may not be obvious from looking at the images in this grid, but an important note here, and a significant difference from the previous lesson, is that these images come in all shapes and sizes. When you did the handwriting recognition example, you had 28x28 greyscale images to work with. These are color and come in a variety of shapes. Before training a neural network with them, you'll need to tweak the images. You'll see that in the next section.
Ok, now that you have an idea of what your data looks like, the next step is to define the model that will be trained to recognize cats or dogs from these images.
We then add a couple of convolutional layers as in the previous example, and flatten the
final result to feed into the densely connected layers.
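A minimal sketch of such a model (the exact filter counts and the 512-unit dense layer are assumptions; the 150x150x3 input shape matches the preprocessing described below):

```python
import tensorflow as tf

model = tf.keras.models.Sequential([
    # Three convolution + max-pooling blocks over 150x150 RGB inputs.
    tf.keras.layers.Conv2D(16, (3, 3), activation='relu',
                           input_shape=(150, 150, 3)),
    tf.keras.layers.MaxPooling2D(2, 2),
    tf.keras.layers.Conv2D(32, (3, 3), activation='relu'),
    tf.keras.layers.MaxPooling2D(2, 2),
    tf.keras.layers.Conv2D(64, (3, 3), activation='relu'),
    tf.keras.layers.MaxPooling2D(2, 2),
    # Flatten the final feature maps to feed the densely connected layers.
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(512, activation='relu'),
    # One sigmoid output: the probability that the image is class 1.
    tf.keras.layers.Dense(1, activation='sigmoid')
])
```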
Next, we'll configure the specifications for model training. We will train our model with
the binary_crossentropy loss, because it's a binary classification problem and our
final activation is a sigmoid. (For a refresher on loss metrics, see the Machine Learning
Crash Course.) We will use the rmsprop optimizer with a learning rate of 0.001. During
training, we will want to monitor classification accuracy.
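In code, that configuration might look like this:

```python
from tensorflow.keras.optimizers import RMSprop

# Binary cross-entropy loss to match the sigmoid output,
# RMSprop with a 0.001 learning rate, and accuracy as the metric.
model.compile(loss='binary_crossentropy',
              optimizer=RMSprop(learning_rate=0.001),
              metrics=['accuracy'])
```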
NOTE: In this case, using the RMSprop optimization algorithm is preferable
to stochastic gradient descent (SGD), because RMSprop automates learning-rate
tuning for us. (Other optimizers, such as Adam and Adagrad, also automatically adapt
the learning rate during training, and would work equally well here.)
Data Preprocessing
Let's set up data generators that will read pictures in our source folders, convert them
to float32 tensors, and feed them (with their labels) to our network. We'll have one
generator for the training images and one for the validation images. Our generators will
yield batches of 20 images of size 150x150 and their labels (binary).
As you may already know, data that goes into neural networks should usually be
normalized in some way to make it more amenable to processing by the network. (It is
uncommon to feed raw pixels into a convnet.) In our case, we will preprocess our
images by normalizing the pixel values to be in the [0, 1] range (originally all values
are in the [0, 255] range).
In Keras this can be done via
the keras.preprocessing.image.ImageDataGenerator class using
the rescale parameter. This ImageDataGenerator class allows you to instantiate
generators of augmented image batches (and their labels) via .flow(data,
labels) or .flow_from_directory(directory). These generators can then be used
with the Keras model methods that accept data generators as
inputs: fit, evaluate_generator, and predict_generator.
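A sketch of both generators, assuming the train_dir and validation_dir variables defined earlier:

```python
from tensorflow.keras.preprocessing.image import ImageDataGenerator

# All pixel values will be rescaled from [0, 255] to [0, 1].
train_datagen = ImageDataGenerator(rescale=1./255)
val_datagen = ImageDataGenerator(rescale=1./255)

# Flow images in batches of 20, resized to 150x150, with binary
# labels inferred automatically from the subdirectory names.
train_generator = train_datagen.flow_from_directory(
    train_dir,
    target_size=(150, 150),
    batch_size=20,
    class_mode='binary')

validation_generator = val_datagen.flow_from_directory(
    validation_dir,
    target_size=(150, 150),
    batch_size=20,
    class_mode='binary')
```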
Training
Let's train on all 2,000 images available, for 15 epochs, and validate on all 1,000 validation images. (This may take a few minutes to run.)
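For example (the step counts are derived from 2,000 training and 1,000 validation images at a batch size of 20):

```python
history = model.fit(
    train_generator,
    steps_per_epoch=100,   # 2,000 training images / batch size 20
    epochs=15,
    validation_data=validation_generator,
    validation_steps=50,   # 1,000 validation images / batch size 20
    verbose=2)
```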
Do note the values per epoch.
You'll see four values per epoch -- loss, accuracy, validation loss, and validation accuracy.
The loss and accuracy are a great indication of the progress of training. The model makes a guess as to the classification of the training data, and then measures it against the known label to calculate the result. Accuracy is the portion of correct guesses. The validation accuracy is the same measurement on data that has not been used in training. As expected, this will be a bit lower. You'll learn about why this occurs in the section on overfitting later in this course.
Running the Model
Let's now take a look at actually running a prediction using the model. This code will allow you to choose one or more files from your file system; it will then upload them and run them through the model, giving an indication of whether the image contains a dog or a cat.
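A sketch assuming a Colab environment (the google.colab.files upload widget); the 0.5 decision threshold on the sigmoid output is a common convention:

```python
import numpy as np
from google.colab import files
from tensorflow.keras.preprocessing import image

uploaded = files.upload()

for fn in uploaded.keys():
    # Load the uploaded file at the size the model expects.
    img = image.load_img(fn, target_size=(150, 150))
    x = image.img_to_array(img) / 255.0  # match the training rescale
    x = np.expand_dims(x, axis=0)

    # Sigmoid output near 1 means "dog", near 0 means "cat".
    classes = model.predict(x)
    print(classes[0])
    if classes[0] > 0.5:
        print(fn + " is a dog")
    else:
        print(fn + " is a cat")
```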
Visualizing Intermediate Representations
To get a feel for what kind of features our convnet has learned, one fun thing to do is to
visualize how an input gets transformed as it goes through the convnet.
Let's pick a random cat or dog image from the training set, and then generate a figure
where each row is the output of a layer, and each image in the row is a specific filter in
that output feature map. Rerun this cell to generate intermediate representations for a
variety of training images.
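One way to sketch this is to build a side model that exposes every layer's output, assuming the model and directory variables defined earlier:

```python
import os
import random
import numpy as np
import matplotlib.pyplot as plt
import tensorflow as tf
from tensorflow.keras.preprocessing.image import img_to_array, load_img

# A model that returns the output of every layer of the trained model.
successive_outputs = [layer.output for layer in model.layers]
visualization_model = tf.keras.models.Model(inputs=model.input,
                                            outputs=successive_outputs)

# Pick a random training image and prepare a 1x150x150x3 batch.
cat_files = [os.path.join(train_cats_dir, f) for f in train_cat_fnames]
dog_files = [os.path.join(train_dogs_dir, f) for f in train_dog_fnames]
img_path = random.choice(cat_files + dog_files)
img = load_img(img_path, target_size=(150, 150))
x = img_to_array(img).reshape((1, 150, 150, 3)) / 255.0

# Run the image through the network, collecting each layer's output.
successive_feature_maps = visualization_model.predict(x)
layer_names = [layer.name for layer in model.layers]

for layer_name, feature_map in zip(layer_names, successive_feature_maps):
    if len(feature_map.shape) != 4:
        continue  # skip dense layers; only conv/pool maps are images
    n_features = feature_map.shape[-1]
    size = feature_map.shape[1]
    # Tile each filter's output side by side in one row.
    display_grid = np.zeros((size, size * n_features))
    for i in range(n_features):
        f = feature_map[0, :, :, i]
        # Normalize each filter to a displayable 0-255 range.
        f = (f - f.mean()) / (f.std() + 1e-5) * 64 + 128
        display_grid[:, i * size:(i + 1) * size] = np.clip(f, 0, 255)
    scale = 20.0 / n_features
    plt.figure(figsize=(scale * n_features, scale))
    plt.title(layer_name)
    plt.grid(False)
    plt.imshow(display_grid, aspect='auto', cmap='viridis')
plt.show()
```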
As you can see, we go from the raw pixels of the images to increasingly abstract and compact representations. The representations downstream start highlighting what the network pays attention to, and they show fewer and fewer features being "activated"; most are set to zero. This is called "sparsity." Representation sparsity is a key feature of deep learning.
These representations carry increasingly less information about the original pixels of the
image, but increasingly refined information about the class of the image. You can think
of a convnet (or a deep network in general) as an information distillation pipeline.
Clean Up
Before running the next exercise, run the following cell to terminate the kernel and free
memory resources:
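One common way to do this in a notebook environment running on Linux is to kill the kernel's own process (note: this immediately ends the session):

```python
import os
import signal

# Terminate the current kernel process to release memory.
os.kill(os.getpid(), signal.SIGKILL)
```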