Earlier this summer, Google unleashed a network of artificial neurons and has let it crawl the Internet ever since. More disconcerting, the system, trained to identify objects and people in image files, has developed a tendency to let its mind wander.
The result has been dubbed the “Deep Dream” system.
Explained by Google: “We train an artificial neural network by showing it millions of training examples and gradually adjusting the network parameters until it gives the classifications we want. The network typically consists of 10–30 stacked layers of artificial neurons. Each image is fed into the input layer, which then talks to the next layer, until eventually the ‘output’ layer is reached. The network’s ‘answer’ comes from this final output layer.”
Translated by Gizmodo: “When you look for shapes in the clouds, you’ll often find things you see every day: Dogs, people, cars. It turns out that artificial ‘brains’ do the same thing. Google calls this phenomenon ‘Inceptionism,’ and it’s a shocking look into how advanced artificial neural networks really are.”
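To make the “stacked layers” idea concrete, here is a minimal sketch in Python with NumPy. Everything in it is illustrative: the layer sizes are made up, the weights are random, and Google’s real image classifiers are far deeper convolutional networks than this toy.

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    # A common per-neuron nonlinearity: keep positives, zero out negatives.
    return np.maximum(0, x)

def softmax(x):
    # Turn the final layer's raw scores into probabilities over classes.
    e = np.exp(x - x.max())
    return e / e.sum()

# Hypothetical sizes: 8 input features -> two hidden layers of 16 -> 4 classes.
layer_sizes = [8, 16, 16, 4]
weights = [rng.normal(scale=0.5, size=(m, n))
           for m, n in zip(layer_sizes[:-1], layer_sizes[1:])]

def classify(image_features):
    activation = image_features               # the "input layer"
    for w in weights[:-1]:
        activation = relu(activation @ w)     # each layer talks to the next
    return softmax(activation @ weights[-1])  # the "output" layer's answer

probs = classify(rng.normal(size=8))
print("predicted class:", probs.argmax())
```

Training, as Google describes it, would mean showing the network millions of labelled images and nudging those weight matrices until the output layer’s answers match the labels.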
Getting to the point: I thought it would be fun to direct the attention of Google’s neural network to the photographs that Rebecca Blissett took up at Pemberton over the weekend and ask the creepy machine what it thinks.
So here you go: the 2015 Pemberton Music Festival through the eyes of Google’s Deep Dream system. Enjoy the trip.