{ "cells": [ { "cell_type": "markdown", "metadata": {}, "source": [ "# CAAM 519: Computational Science I\n", "## Homework #4: Tensorflow\n", "### Due date: Monday, 12/16 at 11am\n", "\n", "Your fourth homework will require you to use TensorFlow to train a predictive model on a new data set, \"fashion MNIST.\" You will be asked to implement \"adversarial training\", which will require that you customize the TensorFlow training loop. Just fill in the missing code below!\n", "\n", "To submit this assignment, please create a directory named ``homework-4`` on your git repository, commit this (completed) notebook in that directory under the name ``adversarial.ipynb``, and git tag the final submission to the repository with the tag name ``homework-4``.\n", "\n", "First, we load in the necessary libraries:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "from __future__ import absolute_import, division, print_function, unicode_literals\n", "import tensorflow as tf\n", "tf.keras.backend.set_floatx('float64')\n", "\n", "import numpy as np\n", "import matplotlib.pyplot as plt" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Next, we load the \"Fashion MNIST\" data set. This is like the MNIST handwritten digit data set (i.e. it consists of 28x28 grayscale images), but the pictures are instead of clothing items. This makes the classification task a bit more difficult." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "fashion_mnist = tf.keras.datasets.fashion_mnist\n", "\n", "(train_images, train_labels), (test_images, test_labels) = fashion_mnist.load_data()\n", "\n", "train_images = train_images / 255.0\n", "test_images = test_images / 255.0\n", "\n", "class_names = ['T-shirt/top', 'Trouser', 'Pullover', 'Dress', 'Coat',\n", " 'Sandal', 'Shirt', 'Sneaker', 'Bag', 'Ankle boot']" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "__Task 1__: In the following cell, specify and train a simple neural network model to classify images from the Fashion MNIST data set. Be sure to use the training data (not test data) for the training. The model should \n", "1. flatten its inputs, then \n", "2. apply a Dense layer with the ReLU activation function, then finally \n", "3. apply a Dense layer with 10 outputs and the softmax activation function. \n", "\n", "Choose the network size and training features such that you attain >85% test accuracy on your final trained model." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# Inplement your model here! Make sure it is named \"simple_model\"\n", "# simple_model = ..." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "test_loss, test_acc = simple_model.evaluate(test_images, test_labels, verbose=2)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The following function will plot a handful of images, along with their predicted classification (and whether that it was right or wrong). 
" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "def plot_images(X, y, yp, M, N):\n", " f, ax = plt.subplots(M, N, sharex=True, sharey=True, figsize=(2.0 * N, 2.0 * M))\n", " for i in range(M):\n", " for j in range(N):\n", " ax[i][j].imshow(X[i*N+j], cmap=plt.cm.binary)\n", " title = ax[i][j].set_title(\"Pred: {}\".format(class_names[yp[i*N+j].argmax()]))\n", " plt.setp(title, color=('g' if yp[i*N+j].argmax() == y[i*N+j] else 'r'))\n", " ax[i][j].set_axis_off()\n", " plt.tight_layout()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Run the following to illustrate some of the Fashion MNIST test images, along with which classes your model believes each belongs to." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "plot_images(test_images, test_labels, simple_model.predict(test_images), 3, 6)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "__Task 2__: Implement the _fast gradient sign method_ to adversarially attack your network. Mathematically, this attack is of the form\n", "$$\n", " \\tilde{x} = x + \\epsilon * \\operatorname{sign}(\\nabla_x \\ell(\\theta; x, y)),\n", "$$\n", "where \n", "* $x$ is an input image, \n", "* $y$ is its true label, \n", "* $\\ell$ is the loss function with respect to current model parameters $\\theta$ on the input/output pair $(x,y)$,\n", "* $\\epsilon > 0$ is some perturbation amount, and \n", "* $\\operatorname{sign}$ returns $+1$ if its input is positive, and $-1$ otherwise.\n", "\n", "Essentially, you compute the gradient of the loss function with respect to $x$ at the point $(x,y)$, and then throw away all information about the magnitude of the gradient, only recording its sign.\n", "\n", "You will implement a method, ``fgsm``, which takes four arguments:\n", "* ``model``: Your MyModel\n", "* ``input_image``: A $N \\times 28 \\times 28$ ``tf.Tensor`` containing $N$ input images of size $28 \\times 28$\n", "* ``input_label``: A $N \\times 10$ ``tf.Tensor`` containing $N$ labels in \"one hot encoding\". In row $i$, the $j$-th entry will contain a 1 if image $i$ is of class $j$, and a 0 otherwise.\n", "* ``epsilon``: A positive floating point value.\n", "\n", "The function should return a $N \\times 28 \\times 28$ ``tf.Tensor``. For each row $i$, presume that in the equation above $x$ is the $i$-th row of ``input_image``. Then the $i$-th row of the returned tensor should contain $\\tilde{x}$ as defined above, _with the further constraint that you should clip the values of each entry of the perturbed image $\\tilde{x}$ to lie in [0,1]_.\n", "\n", "Hints: the way to compute gradients will be nearly identical to how it is done in ``train_step``. After that, you can use the ``tf.sign`` and ``tf.clip_by_value`` functions to finish the job." 
] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "def fgsm(model, input_image, input_label, epsilon):\n", " # Your implementation here" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "You can test your implementation visually using the following code:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "tensor_images = tf.convert_to_tensor(test_images)\n", "tensor_one_hot_labels = tf.convert_to_tensor(tf.one_hot(test_labels, 10))\n", "perturbed_images = fgsm(simple_model, tensor_images, tensor_one_hot_labels, 0.1)\n", "\n", "plot_images(perturbed_images, test_labels, simple_model.predict(perturbed_images), 3, 6)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The images should qualitatively similar to the ones pictured above without the attack, but with gray splotches in the white background, seemingly at random. Despite their seemingly random nature, the attacks are successful! Most of the images should now be misclassified by your model.\n", "\n", "However, all is not lost. We can use this adversary to make our model more robust by \"playing against it\" during the training algorithm. This is known as adversarial training. In standard training, you might do something like this:\n", "\n", "```\n", "initialize model parameters theta\n", "for each epoch:\n", " for each minibatch of data B:\n", " theta = Update(theta, B)\n", "```\n", "Here, an epoch is an entire pass through the data. In each epoch, you split up the data set into minibatches, choosing a small subset of the data each time. Using that small subset, you update your model parameters through the ``Update`` method (i.e. the Adam updating rule).\n", "\n", "__Task 3__: Implement adversarial training. Before each update to the model parameters, adversarial training applies an attack to the input data, perturbing it in the worst possible way (according to the current parameters of the model). The training ``Update`` step is then taken with respect to these adversarial input images:\n", "```\n", "initialize model parameters theta\n", "for each epoch:\n", " for each minibatch of data B:\n", " perturbed_B = attack(B)\n", " theta = Update(theta, perturbed_B)\n", "```\n", "In your code, the ``attack`` method should be the ``fgsm`` method, implemented above, where the inputs correspond to each training image/label pair in the minibatch.\n", "\n", "Below, I have provided you with the code from class that defines a custom Model with custom training behavior. All you have to do is modify the training loop to do support adversarial training.\n", "\n", "Here is the set up code. Don't modify this cell!" 
] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "class MyModel(tf.keras.Model):\n", " def __init__(self):\n", " super(MyModel, self).__init__()\n", " self.flatten = tf.keras.layers.Flatten()\n", " self.d1 = tf.keras.layers.Dense(128, activation='relu')\n", " self.d2 = tf.keras.layers.Dense(10, activation='softmax')\n", " \n", " def call(self, x):\n", " x = self.flatten(x)\n", " x = self.d1(x)\n", " return self.d2(x)\n", " \n", "adv_model = MyModel()\n", "\n", "loss_object = tf.keras.losses.SparseCategoricalCrossentropy()\n", "optimizer = tf.keras.optimizers.Adam()\n", "\n", "train_loss = tf.keras.metrics.Mean(name='train_loss')\n", "train_accuracy = tf.keras.metrics.SparseCategoricalAccuracy(name='train_accuracy')\n", "\n", "test_loss = tf.keras.metrics.Mean(name='test_loss')\n", "test_accuracy = tf.keras.metrics.SparseCategoricalAccuracy(name='test_accuracy')\n", "\n", "@tf.function\n", "def train_step(images, labels):\n", " with tf.GradientTape() as tape:\n", " predictions = adv_model(images)\n", " loss = loss_object(labels, predictions)\n", " gradients = tape.gradient(loss, adv_model.trainable_variables)\n", " optimizer.apply_gradients(zip(gradients, adv_model.trainable_variables))\n", " \n", " train_loss(loss)\n", " train_accuracy(labels, predictions)\n", " \n", "@tf.function\n", "def test_step(images, labels):\n", " predictions = adv_model(images)\n", " loss = loss_object(labels, predictions)\n", " \n", " test_loss(loss)\n", " test_accuracy(labels, predictions)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "And the code to run training. Modify this cell to do adversarial training! Note that you will have to convert the labels (which are integers from 0 to 9) to a \"one hot encoding\" of this same data." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# mini-batched train and test data\n", "train_ds = tf.data.Dataset.from_tensor_slices((train_images, train_labels)).shuffle(10000).batch(32)\n", "test_ds = tf.data.Dataset.from_tensor_slices((test_images, test_labels)).batch(32)\n", "\n", "# Edit the training loop to do adversarial training\n", "for epoch in range(5):\n", " for images, labels in train_ds:\n", " train_step(images, labels)\n", " \n", " for images, labels in test_ds:\n", " test_step(images, labels)\n", " \n", " template = 'Epoch {}, Loss: {}, Accuracy: {}, Test Loss: {}, Test Accuracy: {}'\n", " print(template.format(epoch + 1,\n", " train_loss.result(),\n", " train_accuracy.result(),\n", " test_loss.result(),\n", " test_accuracy.result()))\n", " \n", " train_loss.reset_states()\n", " train_accuracy.reset_states()\n", " test_loss.reset_states()\n", " test_accuracy.reset_states()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The test accuracy should be lower, around 82%. However, the adversarial accuracy is much better, as we can see pictorally. First, we show the predictions for the first few images from the test set:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "plot_images(test_images, test_labels, adv_model.call(test_images).numpy(), 3, 6)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Next, we check the performance on the adversarial attacks on these images. 
We should see that the network makes much better predictions than before, guessing correctly on roughly 3/4 of the images:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "plot_images(perturbed_images, test_labels, adv_model.call(perturbed_images).numpy(), 3, 6)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "__Extra Credit (10% of assignment)__: Modify the above training code so that it logs the _adversarial accuracy_. That is, at each epoch you adversarially attack each test image, and report the current model's accuracy on these attacked images, along with the train/test loss/accuracy." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "__Extra Credit (20% of assignment)__: Implement another adversarial attack! Here's one idea: take (multiple) steps in the direction of the gradient (i.e. run a few iterations of gradient ascent on the loss). For credit, the adversary should be \"successful\", and able to fool the ``simple_model`` on over half of the test instances." ] } ], "metadata": { "kernelspec": { "display_name": "Python 2", "language": "python", "name": "python2" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 2 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython2", "version": "2.7.15+" } }, "nbformat": 4, "nbformat_minor": 2 }