We investigate the effectiveness of a ‘sparse’ Boltzmann Machine (SBM), fitted natively to the D-Wave quantum annealer architecture, for image classification, and the execution-time benefits over using a classical annealer. We design a series of SBM networks and run image classification experiments, measuring the accuracy of the trained networks and the training times both on the D-Wave QPU and simulated on a CPU. We find poor recognition accuracy, which may be due to the sparsity of the networks or to using default D-Wave parameter settings. We find that the sampling step is faster on the D-Wave QPU than simulated on a classical CPU, and that the benefit increases with network size (larger problems). Overheads from Internet and queuing latencies and from input bottlenecks mean that this advantage is not seen on the full problem until an unrealistically high number of reads per anneal. On a dedicated local machine with no queuing, however, this number reduces significantly, and the QPU becomes more efficient than the CPU on the full problem.
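The comparison above hinges on sampling an Ising network whose connectivity is restricted to the QPU's native working graph, and on the number of reads requested per submission. As a rough illustration only (not code from the paper; the random couplings and the num_reads value are placeholders), the following D-Wave Ocean SDK sketch samples such a natively structured model on the QPU and, for a CPU baseline, with a classical simulated-annealing sampler:

```python
# Illustrative sketch (not the paper's code): sample an Ising model whose
# biases and couplings live only on the QPU's native working graph, i.e. a
# 'sparse' network matched to the hardware, so no minor embedding is needed.
import random

from dwave.system import DWaveSampler          # QPU access (needs a Leap API token)
from neal import SimulatedAnnealingSampler      # classical CPU baseline

qpu = DWaveSampler()

# Placeholder random problem restricted to qubits/couplers that physically exist.
h = {v: random.uniform(-1, 1) for v in qpu.nodelist}
J = {(u, v): random.uniform(-1, 1) for (u, v) in qpu.edgelist}

# num_reads sets how many anneal-readout cycles (samples) one submission returns.
qpu_samples = qpu.sample_ising(h, J, num_reads=100)

# Same model sampled by simulated annealing on the CPU, for comparison.
cpu_samples = SimulatedAnnealingSampler().sample_ising(h, J, num_reads=100)

print(qpu_samples.first.energy, cpu_samples.first.energy)
```

QPU-side timings (anneal, readout, programming) are reported in `qpu_samples.info['timing']`; latencies from the network, queuing, and problem upload sit outside that breakdown, which is why they appear as overheads in the comparison described above.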
doi:10.1007/978-3-031-63742-1_4
@inproceedings(Park++:2024-UCNC-DWave,
  author = "Jessica Park and Nick Chancellor and David Griffin and Viv Kendon and Susan Stepney",
  title = "Benchmarking the D-Wave quantum annealer as a Sparse Boltzmann Machine: recognition and timing performances",
  pages = "43-54",
  doi = "10.1007/978-3-031-63742-1_4",
  crossref = "UCNC-2024" )

@proceedings(UCNC-2024,
  title = "UCNC 2024, Pohang, South Korea, June 2024",
  booktitle = "UCNC 2024, Pohang, South Korea, June 2024",
  series = "LNCS",
  volume = 14776,
  publisher = "Springer",
  year = 2024 )