Notes on "Intriguing Properties of Neural Networks", and two other papers (2014)

Three-Toed Sloth 2019-08-20

\[ \DeclareMathOperator*{\argmax}{argmax} \]

Attention conservation notice: Slides full of bullet points are never good reading; why would you force yourself to read painfully obsolete slides (including even more painfully dated jokes) about a rapidly moving subject?

These are basically the slides I presented at CMU's Statistical Machine Learning Reading Group on 13 November 2014, on the first paper about what have come to be called "adversarial examples". They include some notes I made after the group meeting on the Q-and-A, though I may not have properly credited (or understood) everyone's contributions even at the time, plus some even rougher notes on two relevant papers that came out the following month. Presented now, because I'm procrastinating on preparing my fall class, in the interest of the historical record.

Paper I: "Intriguing properties of neural networks" (Szegedy et al.)

Background

  • Nostalgia for the early 1990s: G. Hinton and company are poised to take over the world, NIPS is mad for neural networks, Clinton is running for President...
    • Learning about neural networks for the first time in cog. sci. 1
    • Apocrypha: a neural network supposed to distinguish tanks from trucks in aerial photographs actually learned about parking lots...
  • The models
    • Multilayer perceptron \[ \phi(x) = (\phi_K \circ \phi_{K-1} \circ \cdots \circ \phi_1)(x) \] (a code sketch of this composed-maps view follows this list)
    • This paper not concerned with training protocol, just following what others have done
  • The applications
    • MNIST digit-recognition
    • ImageNet
    • \(10^7\) images from YouTube
    • So we've got autoencoders, we've got convolutional networks, we've got your favorite architecture and way of training it
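
To make the composed-maps notation above concrete, here is a minimal NumPy sketch (not the paper's code; the layer widths and the ReLU nonlinearity are illustrative assumptions, nothing fixed by the paper):

```python
# A toy instance of phi(x) = (phi_K o ... o phi_1)(x): each layer phi_k
# is an affine map followed by a nonlinearity.
import numpy as np

rng = np.random.default_rng(0)

def make_layer(n_in, n_out):
    W = rng.normal(scale=n_in ** -0.5, size=(n_out, n_in))
    b = np.zeros(n_out)
    return lambda x: np.maximum(0.0, W @ x + b)  # phi_k(x) = relu(Wx + b)

layers = [make_layer(784, 256), make_layer(256, 64)]  # phi_1, phi_2 (K = 2)

def phi(x):
    # Compose in order: phi_K(phi_{K-1}(...phi_1(x)...))
    for layer in layers:
        x = layer(x)
    return x

print(phi(rng.normal(size=784)).shape)  # (64,): the top-layer representation
```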

Where Are the Semantics?

  • Claim (the literature, passim): individual hidden units in the network encode high-level semantic features
  • Support for the claim: look at the images which maximize the activation of units in some layer \[ \mathcal{X}_i = \argmax_{x}{\langle \phi(x), e_i \rangle} \] then do story-telling about what \(x \in \mathcal{X}_i\) have in common
  • The critique: pick a random unit vector \(v\) and do similar story-telling about \[ \mathcal{X}_{v} = \argmax_{x}{\langle \phi(x), v \rangle} \] (a sketch of this comparison follows below)

  • Real units (figure from the paper: the top images maximizing individual units' activations)
    • Comment on the top left: white flowers?!?
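
To show the shape of the claim-vs-critique experiment, here is a minimal sketch; the random feature map and random "images" below are stand-ins (my assumptions, not the paper's setup) for a trained network and a held-out image set, and the argmax is approximated by ranking a finite sample:

```python
# Compare the top activators of a single hidden unit (basis direction e_i)
# with the top activators of a random unit direction v in the same layer.
import numpy as np

rng = np.random.default_rng(1)

# Stand-in feature map phi: a fixed random two-layer net, (784,) -> (64,).
W1 = rng.normal(size=(256, 784)) / 28.0
W2 = rng.normal(size=(64, 256)) / 16.0
def phi(x):
    return np.maximum(0.0, W2 @ np.maximum(0.0, W1 @ x))

X = rng.normal(size=(1000, 784))       # stand-in for a dataset of images
reps = np.stack([phi(x) for x in X])   # top-layer representations, (1000, 64)

def top_images(direction, k=8):
    # X_v = argmax_x <phi(x), direction>, approximated over the finite sample
    scores = reps @ direction
    return np.argsort(scores)[::-1][:k]  # indices of the k strongest activators

i = 7                                    # an arbitrary unit
e_i = np.eye(reps.shape[1])[i]           # basis direction: one hidden unit
v = rng.normal(size=reps.shape[1])
v /= np.linalg.norm(v)                   # random unit direction, same space

print(top_images(e_i))  # the images unit i "detects": story-telling material
print(top_images(v))    # the critique: a random direction ranks images too
```

The paper's point is that the maximizing images for a random \(v\) support semantic story-telling about as well as those for a coordinate direction \(e_i\).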
