Classical machine learning has benefited several physics subfields, from materials science to medical imaging. Implementing machine-learning algorithms on quantum computers could expand their use to more complex problems and to datasets that are inherently quantum. Nayeli Rodríguez-Briones at the Technical University of Vienna and Daniel Park at Yonsei University in South Korea have now proposed a thermodynamics-inspired protocol that could make quantum machine-learning techniques more efficient [1].
In one common classical machine-learning task, a system is trained on a known dataset and then challenged to classify new data. Its output quantifies both the classification and that classification’s uncertainty. Once the system’s parameters are fixed, evaluating the same data yields the same output. In contrast, the output of a quantum machine-learning algorithm is read out as binary measurements of qubits, which are inherently probabilistic. Because a single measurement provides only limited information, the computation must be repeated many times.
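The cost of that repetition can be seen in a toy simulation. The sketch below (a minimal illustration, not the authors' protocol; the probability `p_true` and the function name are assumptions for the example) models the readout qubit as a biased coin: a single measurement returns only 0 or 1, while averaging many shots recovers the underlying probability.

```python
import random

def sample_readout(p, shots, rng=random.Random(0)):
    """Simulate repeated binary measurements of a readout qubit
    that yields outcome 1 with probability p; return the average."""
    return sum(rng.random() < p for _ in range(shots)) / shots

p_true = 0.7  # hypothetical probability encoding the algorithm's decision

one_shot = sample_readout(p_true, 1)        # a single measurement: just 0.0 or 1.0
many_shots = sample_readout(p_true, 10_000) # averaging narrows in on p_true

print(one_shot, many_shots)
```

A single shot carries almost no information about `p_true`; only the accumulated statistics of many repetitions do.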
Rodríguez-Briones and Park recognized that how clearly a quantum computer reveals its output is governed by entropy. When the readout qubit is highly polarized—strongly favoring one outcome—its entropy is low, and few repetitions are needed to obtain a firm result. An unpolarized, high-entropy readout qubit returns both states more evenly, so more repetitions are required. The researchers showed that the readout qubit’s polarization can be increased by transferring its entropy to ancillary qubits, effectively cooling one while warming the others. Between runs, the ancillary qubits are reset by coupling them to a heat bath. Crucially, this entropy transfer raises the readout qubit’s degree of polarization without changing the encoded decision. The upshot: a given result can be reached with fewer repetitions.
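The payoff of boosting polarization follows from ordinary shot statistics. The sketch below (an illustrative back-of-the-envelope estimate, not the paper's analysis; the function name and the three-standard-deviation threshold are assumptions) uses the fact that after N shots the estimated bias of a qubit has a standard error of roughly 1/sqrt(N), so resolving a bias of size epsilon takes on the order of 1/epsilon² repetitions.

```python
import math

def shots_for_confidence(epsilon, z=3.0):
    """Estimate the repetitions needed to resolve the sign of a readout
    qubit's bias epsilon (<Z> = epsilon) at about z standard deviations.
    The binomial standard error of the estimated bias after N shots is
    at most 1/sqrt(N), so we need epsilon > z/sqrt(N), i.e. N > (z/epsilon)^2."""
    return math.ceil((z / epsilon) ** 2)

# Hypothetical polarizations of the readout qubit before and after cooling
print(shots_for_confidence(0.05))  # weakly polarized: 3600 repetitions
print(shots_for_confidence(0.2))   # moderately polarized: 225 repetitions
print(shots_for_confidence(0.8))   # strongly polarized: 15 repetitions
```

Under this simple model, raising the polarization from 0.05 to 0.8 cuts the repetition count by more than two orders of magnitude, which is the kind of saving the entropy-transfer protocol targets.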