Notes

Notes is a venue for informal posts about development: code snippets, annotated links, brief comments, and the like.

Experimenting with Google’s deepdream project on OS X

Dreaming new worlds into being
This is what happens when you leave deepdream on overnight.

I have been experimenting with Google’s deepdream project over the last few days. Since I started posting images around (most prominently this collection of dreamscapes), people have been asking: how can I make my own? This short collection of notes is an early attempt to gather up enough information for technically-minded people on Mac OS X to start dreaming.

Rather than compile from source, I used Ryan Kennedy’s Docker container method. If you are new to Docker (as I was) you may find this guide helpful, assuming you already use Homebrew. From these guides we can assemble some rough steps to follow (but be sure to reference the originals for more details):

  1. Install Homebrew.
  2. Install VirtualBox. You’ll need to download an installer and execute it the old-fashioned way. This is used to run boot2docker, a lightweight Linux distribution specifically designed for running Docker containers.
  3. Install Docker and boot2docker from the command line with: brew update, brew install docker, and brew install boot2docker.
  4. Initialize and load boot2docker with boot2docker init and boot2docker up. I also had to run a bit of command line voodoo, eval "$(boot2docker shellinit)", to get things working. If all goes well you’ll get an IP address you can use in the next step.
  5. Set a Docker environment variable with export DOCKER_HOST=tcp://192.168.59.103:2375 (or whatever IP address you saw in the last step).
  6. Fetch the deepdream container with docker pull ryankennedyio/deepdream.
  7. Load the container with docker run -d -p 443:8888 -e "PASSWORD=password" -v /[path]:/src ryankennedyio/deepdream. Be sure to replace [path] with a valid path to a working folder. This will be accessible from within the Docker container (which means you can also store other models there).
  8. Run boot2docker ip and navigate your browser to the IP address it returns (prefaced with https://). Ignore the security warning, enter the password you set in step 7, and this should load the Python notebook originally released by Google.

After a system restart you will need to repeat steps 4 through 8 above, skipping boot2docker init and the docker pull in step 6 (both only need to happen once).

Actually using the notebook requires some trial and error if you’re not familiar with the conventions (as I wasn’t). Ryan Kennedy’s original post provides some basic tips on navigating the interface. In short, click on code blocks from the very top and hit the play button until you reach the code block that defines the img variable. Here you will want to enter a filename that matches one in the path specified when the Docker container was originally loaded. If you want to skip right to the “inception” style image generation head down to the pair of code blocks that starts with !mkdir frames and execute both in sequence. If everything is wired up correctly your computer will start dreaming.
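
For reference, the img block looks roughly like the following sketch (based on Google’s notebook; the filename is a hypothetical stand-in for one of your own images in the mounted folder):

import numpy as np
import PIL.Image

# Load the source image as a float32 array; inside the notebook these
# imports (and the showarray display helper used below) are already defined
img = np.float32(PIL.Image.open('/src/myphoto.jpg'))
showarray(img)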

How about customizing the output? I am no expert in Python or neural nets, so I’m doing a lot of guesswork that might be foolish or wrong, but I can relate a few things I’ve found.

First of all, to swap in another model you can download one from the model zoo, copy the files to your working folder, and alter the model definitions in the second code block like so:

model_path = '/src/models/googlenet_places205/'
net_fn = model_path + 'deploy_places205.protxt'
param_fn = model_path + 'googlelet_places205_train_iter_2400000.caffemodel'
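
(Note the unusual googlelet spelling in that last filename; whatever you download, make sure the string here matches the filename on disk exactly.)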

These are the settings for Places205-GoogLeNet, another neural network used in the original post by Google. As the name implies, this one has been trained to recognize a dizzying variety of places. In my admittedly limited experience thus far I’ve mainly seen regular features of man-made landscapes (temples, windmills, castles, and so on) as well as a variety of other scenes: baskets of fruit, piles of clothing, grocery store shelves, halls filled with chairs, and the like.

Whatever model you use, you’ll want to execute net.blobs.keys() to see what layers are available. These layers are then targeted by specifying the end parameter in the make_step and deepdream functions defined under the “Producing dreams” heading, where you may also wish to play with step_size, jitter, iter_n, octave_n, and octave_scale. I should note that I haven’t had any good results from pointing the two end parameters at different layers; if you’re looking to dream coherent objects you may wish to set them to the same layer. A quick sketch follows.
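
Assuming you’ve already run the notebook’s earlier cells so that net, img, and deepdream() are defined, targeting a layer looks something like this (the parameter values shown are the notebook’s defaults; the layer name is just an example):

# See which layers the loaded model exposes
print net.blobs.keys()  # the notebook runs Python 2

# Target a specific layer via `end`; iter_n, octave_n, and octave_scale are
# deepdream()'s defaults, while step_size and jitter are forwarded on to
# make_step() as **step_params
result = deepdream(net, img, iter_n=10, octave_n=4, octave_scale=1.4,
                   end='inception_4c/output', step_size=1.5, jitter=32)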

How about the layers themselves: what do they produce? The emerging community at r/deepdream is a good place to look for this sort of thing, and here is one test suite I found useful. In short, each layer of the default neural net optimizes for slightly different dreams. In my experience I noted the following (I will expand and correct these notes as I continue exploring, and a sketch for running your own layer survey follows the list):

  • inception_3a: mostly geometric patterns
  • inception_3b: still just patterns
  • inception_4a: eyes and some dogs
  • inception_4b: lots of dog faces
  • inception_4c: starting to get more into cars and buildings
  • inception_4d: more of a menagerie
  • inception_4e: lots of different animals; birds, snakes, monkeys, and so on
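
Such a survey can be scripted in a few lines, again assuming the notebook’s definitions are in scope (the layer list and output path here are arbitrary choices of mine; substitute whatever net.blobs.keys() reports for your model):

import numpy as np
import PIL.Image

# Dream the same source image on several layers and save each result
# so the outputs can be compared side by side
for layer in ['inception_3a/output', 'inception_4c/output', 'inception_4e/output']:
    result = deepdream(net, img, end=layer)
    filename = '/src/test_%s.jpg' % layer.replace('/', '_')
    PIL.Image.fromarray(np.uint8(result)).save(filename)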

Of course, your results will be highly dependent on your source images (assuming you get outside of the “dog-slug” default layers). One run produced boats on an otherwise empty ocean stretching to the horizon, which was quite cool.

Another note: the scale coefficient s in the block of code that actually generates images can be adjusted to reduce the amount of cropping that occurs with every step of the feedback loop. The scaling is necessary to provide the neural net with slightly new information on each pass. If you’re using your own images you may want to allow for some trimming, depending on how strong an effect you are after; the loop is sketched below.
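
Roughly, the frames block looks like this (a sketch of the notebook’s loop; np, nd, and PIL come from its top-of-file imports, and it runs Python 2):

frame = img
h, w = frame.shape[:2]
s = 0.05  # scale coefficient: the fraction zoomed away on each pass
for i in xrange(100):
    frame = deepdream(net, frame)
    PIL.Image.fromarray(np.uint8(frame)).save('frames/%04d.jpg' % i)
    # zoom in slightly so the net sees new information next time around;
    # lowering s reduces how much of the image is trimmed per frame
    frame = nd.affine_transform(frame, [1 - s, 1 - s, 1], [h * s / 2, w * s / 2, 0], order=1)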

If you would rather set up your own web service to drop images into, try clouddream. I’m still working from the Python notebook for the time being since it offers far more customization options.

Finally, if you’d like to check out some of my own work you can find a few early experiments on my main blog.

More updates to come!

A few notes about Medium’s typographic manifesto

Marcin Wichary has a fantastic series of posts documenting Medium’s commitment to beautiful web typography. Apart from being intrigued by some of the little tricks they use to enforce better typing habits, like preventing users from entering two spaces in a row, I was also interested to read the technical supplement for designers and developers. It reminded me of a few things I’ve been meaning to add to my main theme and introduced several things I hadn’t really thought about.


Removing the taxonomy base from WordPress permalinks

A comment archived from Twitter:

Another WordPress thing that should be easy but isn’t: removing the taxonomy base from the URL. ‘Use a random old plugin’ is not a great option.

Responsible use of WordPress plugins requires code evaluation. Sometimes this is easy and straightforward, but at other times one must weigh the benefit derived against the time invested. For this little feature I decided I had better come back to it later. Some promising leads:

Using Git with Dropbox

I use GitHub for source control but don’t always want to publicize everything I am meddling with. GitHub offers private repo plans but since I’m usually not working with anyone else the economics of it don’t make sense for me.

Thankfully, it’s not hard to use Git and Dropbox to host private repos. This tutorial is short and sweet and worked beautifully for me. If you prefer something with a bit more explanation this post might suffice, or perhaps you’d like to check out Stack Overflow. At any rate, it’s not hard to get up and running—and now I have a good excuse to learn more command line Git!

A quick fix for a glitchy WordPress admin panel

In the last few weeks I’ve experienced glitches in the WordPress admin panel. Symptoms: post contents fail to load in the editor, the toolbar and slug aren’t visible, taxonomies refuse to update, the media library freezes on image upload, and so on. Very annoying, especially as it only happened some of the time.

I traced this back to script loading, specifically jQuery. The fix? Easy. Just add this to wp-config.php:

define( 'CONCATENATE_SCRIPTS', false );

What this usually does (when set to true) is stitch all the back-end JavaScript files together into one. Disable this feature and you may experience slightly longer load times. Personally, I don’t notice a thing, perhaps because the admin panel is a bit clunky anyway!

Of course, I’d like to know what is really going on here, but since I have a hard time isolating the issue (sometimes it happens, sometimes not), I’m just going to go with this quick and dirty fix. I have also updated my WordPress configuration file boilerplate to include this setting.

Transparent encryption of sensitive data in Git repositories

Looking for a way to easily encrypt and store configuration files and other sensitive information in your Git repositories? Here are your options:

  • git-crypt: written in C++, easily installed on OS X via Homebrew with brew install git-crypt. I followed this tutorial to get up and running in no time.
  • git-encrypt: node-based, available through npm: npm install -g git-encrypt. I was dissuaded by the more complex configuration and haven’t personally tested this one.
  • transcrypt: a Bash script you can clone from GitHub. I also got this up and running with ease.

It is worth noting that none of these tools is meant to encrypt an entire repository. Additionally, your local working copy is not encrypted, so you still have to secure the sensitive data on your own machine.

Unfortunately none of these solutions work with the GitHub client for Mac OS X, which is what I use for most projects. Maybe it’s time to get up to speed with command line Git-fu.