D-Wave Qubits North America Conference

RBM Image Generation Using the D-Wave 2000Q


Implementing machine learning for image processing on the D-Wave system is currently limited by the small number of available qubits. A commonly used neural-network method on the D-Wave is the Restricted Boltzmann Machine (RBM), since its energy-based model matches that of the D-Wave, which can act as a physical Boltzmann machine. However, RBMs have seen limited practical use in image generation because of the limited qubit count and the binary output returned by the D-Wave. Deep autoencoders are trained to take image inputs, compress them through a bottleneck in the network, and expand the compressed data back to the original input size. We show how to use a stacked autoencoder to compress MNIST digits from a size of 28 x 28 to a size of 7 x 7 and to translate the binary representation back to a grayscale representation consistent with the original 28 x 28 MNIST digits. We first train the stacked autoencoder to compress and translate MNIST digits into 7 x 7 binary digits. We then train our RBM on the D-Wave using the compressed, normalized weights from the encoder. Finally, we sample from the D-Wave and show that these samples, when decoded with our stacked autoencoder, recover MNIST digits at a size of 28 x 28.
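The encode/decode pipeline the abstract describes can be illustrated with a minimal sketch. This is not the authors' implementation: the layer sizes (aside from the 784-pixel input and 49-unit bottleneck stated in the abstract), the intermediate 256-unit layer, and the random stand-in weights are all assumptions for illustration. A trained network would learn these weights; here the sketch only shows the shapes involved, the thresholding that yields the binary 7 x 7 code fed to the RBM, and the sigmoid decoding back to a grayscale 28 x 28 image.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Hypothetical encoder layer sizes: 28*28 input -> 256 -> 7*7 bottleneck.
# The decoder mirrors the encoder. Only the 784 and 49 come from the abstract.
sizes = [784, 256, 49]

# Random weights stand in for a trained stacked autoencoder (illustration only).
enc_W = [rng.normal(0, 0.1, (a, b)) for a, b in zip(sizes[:-1], sizes[1:])]
dec_W = [rng.normal(0, 0.1, (b, a)) for a, b in zip(sizes[:-1], sizes[1:])][::-1]

def encode(x):
    """Compress a flattened 28x28 image to a binary 7x7 code."""
    h = x
    for W in enc_W:
        h = sigmoid(h @ W)
    # Thresholding produces the binary representation consistent with
    # the D-Wave's binary samples.
    return (h > 0.5).astype(np.float64)

def decode(z):
    """Expand a binary 7x7 code back to a grayscale 28x28 image."""
    h = z
    for W in dec_W:
        h = sigmoid(h @ W)
    return h  # values in (0, 1), interpreted as grayscale pixels

x = rng.random(784)   # stand-in for one flattened MNIST digit
z = encode(x)         # binary code of length 49 (7 x 7)
x_hat = decode(z)     # grayscale reconstruction of length 784 (28 x 28)
```

In the paper's pipeline, `z` would instead come from samples drawn on the D-Wave, and `decode` would map those binary samples back to 28 x 28 digit images.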



