17th Granada Seminar

Machine Learning and Physics: Quantum, Classical and Applications

SPEAKERS

Ludovic Berthier

Université de Montpellier

Wednesday 13, 9:00-9:50

Résumé:

Ludovic Berthier received his Ph.D. in Theoretical Physics in 2001 at the École Normale Supérieure in Lyon, France. He was a Marie Curie Postdoctoral Fellow at the Department of Theoretical Physics at Oxford University until 2003. In 2004 he was appointed as a CNRS researcher at the Laboratoire Charles Coulomb at the University of Montpellier, France, where he is now Director of Research. In 2007, he was a visiting scientist at the James Franck Institute of the University of Chicago, US. He works on the statistical mechanics of disordered materials, nonequilibrium systems, and soft matter. He performs theoretical research and computer simulations to develop a fundamental understanding of the structure and dynamics of a broad range of materials that we use on a daily basis, from sandpiles, emulsions, and pastes to window glasses and simple molecular fluids.

Machine learning glasses

It remains a difficult task to predict from first principles the physical properties of disordered materials that are formed out of equilibrium, because the lack of apparent order and the loss of ergodicity severely limit the use of statistical mechanics and computational tools. As in many fields, various machine learning strategies are being developed to attack these open physics problems from new angles. In this talk, I will explain where and how machine learning techniques can help answer some of these important questions about the physics of the glass transition and the properties of amorphous solids.

Juan P. Garrahan

University of Nottingham

Wednesday 13, 11:10-12:00

Résumé:

Juan P. Garrahan has held a Chair in Physics at the University of Nottingham since 2007. His research interests are in the theory of classical and quantum non-equilibrium systems and in the connection between machine learning and statistical physics. He obtained his PhD from the University of Buenos Aires, and in the past he was a Glasstone Fellow at Oxford, an EPSRC Advanced Fellow, a visiting professor at UC Berkeley, and a Visiting Fellow at All Souls College, Oxford. At Nottingham he is the former director of the Centre for Quantum Non-Equilibrium Systems (CQNE) and currently directs the Machine Learning in Science (MLiS) Initiative for the Faculty of Science.

Trajectory ensembles and Machine Learning

Gian Giacomo Guerreschi

Intel

Thursday 14, 11:10-12:00

Résumé:

Gian Giacomo Guerreschi is a research scientist at Intel Labs, where he focuses on theoretical and numerical aspects of quantum computing. Gian Giacomo received his Ph.D. degree in Theoretical Physics from the University of Innsbruck (Austria) and the Institute for Quantum Optics and Quantum Information (IQOQI). Before joining Intel, he was a postdoctoral researcher at Harvard University, where he investigated restricted models of quantum computation: adiabatic quantum optimization and boson sampling.

Gian Giacomo’s research interests include large-scale simulations of quantum algorithms, compilers for quantum circuits, and the extension of neural networks to the quantum regime. His research has been presented at conferences and featured in journals such as Nature Photonics, Physical Review Letters, and Quantum Science and Technology, among others.


Realization of quantum neural networks using repeat-until-success circuits

In the first part, we report the experimental realization of a Quantum Neural Network (QNN) using Repeat-Until-Success (RUS) circuits in a superconducting quantum processor. The realized QNN is the minimal network able to learn all 2-input-to-1-output Boolean functions. The non-linear activation function for the neuron update is obtained using RUS techniques requiring active feedback. The experiments were performed in DiCarlo’s lab at QuTech.
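
For intuition about where the non-linearity comes from: in the quantum-neuron construction of Cao, Guerreschi, and Aspuru-Guzik (arXiv:1711.11240), on which RUS neurons of this kind are based, a successful RUS round maps an input rotation angle θ to q(θ) = arctan(tan²θ), and iterating the map sharpens the response toward a step function at θ = π/4. A minimal classical sketch of that map (illustrative only, not the experiment's calibration):

```python
import numpy as np

# Non-linear activation induced by one successful repeat-until-success (RUS)
# round: theta -> arctan(tan(theta)^2). Iterating sharpens the response
# toward a step at theta = pi/4: values below flow to 0, values above to pi/2.
def q(theta):
    return np.arctan(np.tan(theta) ** 2)

thetas = np.linspace(0.05, np.pi / 2 - 0.05, 7)  # input angles in (0, pi/2)
for rounds in (1, 2, 3):
    out = thetas.copy()
    for _ in range(rounds):
        out = q(out)
    print(f"after {rounds} RUS round(s):", np.round(out, 3))
```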

In the second part, we present the Intel® Quantum SDK, a high-level programming environment designed to integrate all modules required by a quantum computing system. It includes: 1) an intuitive user interface based on the C++ programming language, 2) a sequence of decompositions and optimizations based on the LLVM compiler framework, and 3) a full compilation flow that produces an executable using the user’s choice of backends compatible with Intel quantum hardware. We focus on the features required to compile and run the hybrid quantum-classical programs corresponding to the QNNs discussed in the first part; a generic sketch of the structure of such a program is given below. Interested users can sign up for free access to the Intel® Developer Cloud.
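
To illustrate what "hybrid quantum-classical" means operationally, here is a minimal sketch of the loop such programs implement: a classical optimizer repeatedly calls a parameterized quantum routine. The sketch is plain Python with a numpy-simulated one-qubit circuit standing in for hardware; it is not Intel® Quantum SDK code (the SDK itself is C++ based), and the circuit, target, and parameter-shift details are illustrative assumptions:

```python
import numpy as np

# Hybrid loop: a classical optimizer driving a (simulated) quantum subroutine.
def quantum_expectation(theta):
    # |psi> = RY(theta)|0>; return <psi|Z|psi> = cos(theta)
    psi = np.array([np.cos(theta / 2), np.sin(theta / 2)])
    return psi[0] ** 2 - psi[1] ** 2

target = -0.5                  # toy training target for <Z>
theta, lr = 0.1, 0.2
for step in range(100):
    # parameter-shift rule for the gradient of the expectation value
    shift = 0.5 * (quantum_expectation(theta + np.pi / 2)
                   - quantum_expectation(theta - np.pi / 2))
    theta -= lr * 2 * (quantum_expectation(theta) - target) * shift

print(f"theta = {theta:.3f}, <Z> = {quantum_expectation(theta):.3f}")
```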

Hilbert J. Kappen

Radboud Universiteit (The Netherlands)

Tuesday 12, 15:30-16:20

Résumé:

Bert Kappen’s research focuses on neural networks and machine learning, and the development of self-learning intelligent systems, which, among other things, can assist in locating missing persons. He also explores how quantum technology can be used for machine learning.

Why adiabatic quantum annealing is unlikely to yield speed-up

We study quantum annealing for combinatorial optimization with Hamiltonian $H = z H_f + H_0$, where $H_f$ is diagonal, $H_0 = -|\phi\rangle\langle\phi|$ with $|\phi\rangle$ the equal-superposition state, and $z$ is the annealing parameter. We analytically compute the minimal spectral gap, which scales as $O(1/\sqrt{N})$ with $N$ the total number of states, and its location $z^*$. We show that quantum speed-up requires an annealing schedule that demands precise knowledge of $z^*$, which can be computed only if the density of states of the optimization problem is known. However, the density of states is in general intractable to compute, making quadratic speed-up unfeasible for any practical combinatorial optimization problem. We conjecture that this negative result likely also applies to any other instance-independent transverse Hamiltonian, such as $H_0 = -\sum_{i=1}^{n} \sigma_i^x$.
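
For intuition, the gap structure can be explored numerically on a small random instance. The sketch below (an illustration, with an arbitrary random diagonal $H_f$ and the driver sign convention above) scans $z$ for the minimal gap and compares it to $1/\sqrt{N}$:

```python
import numpy as np

# Scan the spectral gap of H(z) = z*H_f + H_0, with H_0 = -|phi><phi| and
# |phi> the equal superposition state, for a small random diagonal H_f.
rng = np.random.default_rng(0)
n = 8                                    # number of qubits
N = 2 ** n                               # total number of states
H_f = np.diag(rng.uniform(0.0, 1.0, N))  # illustrative random cost Hamiltonian
phi = np.full(N, 1.0 / np.sqrt(N))
H_0 = -np.outer(phi, phi)

zs = np.linspace(0.1, 4.0, 300)
gaps = []
for z in zs:
    evals = np.linalg.eigvalsh(z * H_f + H_0)
    gaps.append(evals[1] - evals[0])     # ground to first-excited gap

i = int(np.argmin(gaps))
print(f"minimal gap {gaps[i]:.4f} at z* ~ {zs[i]:.2f} (1/sqrt(N) = {1/np.sqrt(N):.4f})")
```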

Lucas Lamata

Universidad de Sevilla

Friday 15, 11:10-12:00

Résumé:

I am an Associate Professor (Profesor Titular de Universidad) at Universidad de Sevilla, Spain (with 3 six-year research periods, 1 five-year period, and 4 three-year periods recognized). My research is focused on quantum optics and quantum information, including proposals for quantum simulations with trapped ions and superconducting circuits. I am also interested in quantum artificial intelligence and quantum machine learning, as well as in the emulation of biological behaviours with controllable quantum systems. I enjoy working with experimentalists, and I have made proposals for, and participated in, 16 experiments in collaboration with up to 17 prominent experimental groups in quantum science.

Before Sevilla, I worked in Bilbao at the University of the Basque Country, first as a Marie Curie postdoctoral fellow and subsequently in a Ramón y Cajal position and a Staff Scientist position. Earlier, I was a Humboldt Fellow and a Max Planck postdoctoral fellow for three and a half years at the Max Planck Institute for Quantum Optics in Garching, Germany. Previously, I carried out my PhD at CSIC, Madrid, and Universidad Autónoma de Madrid. I have 19 years of research experience at centers in Spain and Germany, and I have also carried out research through scientific collaborations during one- or two-week stays at centers on all continents, such as Harvard University, ETH Zurich, Google/University of California Santa Barbara, Shanghai University, Tsinghua University, Macquarie University, the Walther-Meissner-Institut in Garching, and IQOQI Innsbruck, among others.

Quantum Reinforcement Learning with Quantum Technologies

In this talk, I will review recent research on quantum artificial intelligence and quantum machine learning. More specifically, I will describe results on quantum reinforcement learning implemented with quantum technologies that we obtained at the University of the Basque Country and, over the past few years, at Universidad de Sevilla. We will show how this kind of algorithm can outperform standard quantum tomography protocols in the regime where quantum resources are constrained (a regime I coined the “reduced-resource scenario” some years ago). We will also revisit implementations on quantum photonic platforms, and more recent results analyzing the effect of decoherence and dissipation on these quantum algorithms. Finally, we will illustrate how decoherence can, in some situations, be beneficial for the learning of quantum systems.

Seth Lloyd

MIT

Friday 15, 9:00-9:50

Résumé:

His research area is the interplay of information with complex systems, especially quantum systems. He has performed seminal work in the fields of quantum computation, quantum communication and quantum biology, including proposing the first technologically feasible design for a quantum computer, demonstrating the viability of quantum analog computation, proving quantum analogs of Shannon’s noisy channel theorem, and designing novel methods for quantum error correction and noise reduction.

Advances in quantum machine learning

Quantum systems can generate patterns in data that are hard to generate classically.  Can they also recognize patterns in data that are hard to recognize via classical machine learning?    This talk presents basic concepts and algorithms in quantum machine learning, including quantum support vector machines (qSVM), quantum generative adversarial networks (qGANs), and quantum neural networks (qNN).   We discuss recent experimental results in realizing quantum machine learning methods.
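
As a concrete instance of the first ingredient, the kernel form of a quantum SVM replaces a classical kernel with the state-overlap kernel K(x, x') = |⟨φ(x)|φ(x')⟩|² and hands it to an ordinary SVM. A minimal classically simulated sketch (the one-qubit feature map and toy data are illustrative choices, not from the talk; requires scikit-learn):

```python
import numpy as np
from sklearn.svm import SVC

def feature_map(x):
    # Hypothetical one-qubit feature map: angle encoding with an extra phase.
    return np.array([np.cos(x), np.sin(x) * np.exp(1j * x)])

def quantum_kernel(X1, X2):
    # Fidelity kernel K(x, x') = |<phi(x)|phi(x')>|^2, computed classically.
    S1 = np.array([feature_map(x) for x in X1])
    S2 = np.array([feature_map(x) for x in X2])
    return np.abs(S1.conj() @ S2.T) ** 2

rng = np.random.default_rng(0)
X = rng.uniform(0, np.pi, 200)
y = (np.sin(2 * X) > 0).astype(int)       # toy labels
Xtr, Xte, ytr, yte = X[:150], X[150:], y[:150], y[150:]

clf = SVC(kernel="precomputed").fit(quantum_kernel(Xtr, Xtr), ytr)
print("test accuracy:", clf.score(quantum_kernel(Xte, Xtr), yte))
```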

Adi Makmal

Institute for Theoretical Physics
University of Innsbruck

Wednesday 13, 15:30-16:20

Résumé:

Adi Makmal received B.S. degrees in computer science and philosophy from the Hebrew University of Jerusalem, Israel, and M.S. and Ph.D. degrees from the Weizmann Institute of Science, Israel, where she worked on problems in computational chemistry. From 2011 to 2016, she held a postdoctoral position with the Quantum Information and Quantum Computation group at the University of Innsbruck, Austria, where she worked on quantum walks and artificial intelligence. She is currently with Microsoft.

Qubit-efficient variational quantum algorithms

Despite being designed to address the limitations of the NISQ era, variational quantum algorithms (VQAs) have not yet proven useful. This is primarily due to the detrimental impact of current noise levels and to the standard information encoding technique, which requires a number of qubits that scales linearly with the problem size. Developing new information encoding schemes that are more qubit-efficient may offer solutions to both challenges. In this talk, I will describe some of our recent progress in developing qubit-efficient VQAs [1] and discuss their potential usage within the NISQ era.

[1] Daniel Yoffe, Amir Natan, and Adi Makmal. “A Qubit-Efficient Variational Selected Configuration-Interaction Method.” arXiv preprint arXiv:2302.06691 (2023).
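
The qubit-count arithmetic motivating such schemes is easy to illustrate: an amplitude-style encoding stores n configuration weights in the 2^m amplitudes of m qubits, so m = ceil(log2 n) suffices, versus n qubits for one-qubit-per-variable encodings. A generic illustration of the scaling (not the specific scheme of [1]):

```python
import math

# Standard one-qubit-per-variable encoding vs a generic amplitude-style
# qubit-efficient encoding needing only ceil(log2(n)) qubits for n variables.
for n_vars in (8, 64, 1024, 2**20):
    print(f"{n_vars:>8} variables -> standard: {n_vars:>8} qubits, "
          f"qubit-efficient: {math.ceil(math.log2(n_vars)):>2} qubits")
```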

Pankaj Mehta

Boston University

Tuesday 12, 11:10-12:00

Résumé:

I am interested in theoretical problems at the interface of physics and biology. I want to understand how the large-scale, collective behaviors observed in biological systems emerge from the interaction of many individual molecular elements, and how these interactions allow cells to perform complex computations in response to environmental cues. I am also part of the BU Bioinformatics Program, the BUMC Center for Regenerative Medicine (CReM), and the BU Biological Design Center.

Why can we learn with so many parameters?  The geometry and statistical physics of overparameterization

Modern machine learning often employs overparameterized statistical models with many more parameters than training data points. In this talk, I will review recent work from our group on such models, emphasizing intuitions centered on the bias-variance tradeoff and a new geometric picture for overparameterized regression.
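
A toy version of the phenomenon in the title fits in a few lines: minimum-norm least squares on random ReLU features typically shows the test error peaking near the interpolation threshold (number of features comparable to the number of training points) and falling again deep in the overparameterized regime. All parameters below are illustrative:

```python
import numpy as np

# Double-descent sketch: random ReLU features + minimum-norm least squares.
rng = np.random.default_rng(0)
n_train, n_test, d = 100, 1000, 20
w_true = rng.normal(size=d)
X = rng.normal(size=(n_train, d)); y = X @ w_true + 0.5 * rng.normal(size=n_train)
Xt = rng.normal(size=(n_test, d)); yt = Xt @ w_true

for p in (10, 50, 90, 100, 110, 200, 1000, 5000):   # number of random features
    W = rng.normal(size=(d, p)) / np.sqrt(d)        # random projection
    F, Ft = np.maximum(X @ W, 0), np.maximum(Xt @ W, 0)
    beta = np.linalg.pinv(F) @ y                    # minimum-norm solution
    print(f"p = {p:>4}: test MSE = {np.mean((Ft @ beta - yt) ** 2):.3f}")
```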

K. Birgitta Whaley

University of California (Berkeley)

Thursday 14, 9:00-9:50

Résumé:

Professor Whaley’s research is at the interfaces of chemistry with physics and with biology. Her work is broadly focused on quantum information and quantum computation, control and simulation of complex quantum systems, and quantum effects in biological systems. Quantum information processing employs superposition, entanglement, and probabilistic measurement to encode and manipulate information in very different ways from the classical information processing underlying current electronic technology. Theoretical research of Professor Whaley’s group in this area is focused on quantum control, quantum information and quantum measurement, analysis and simulation of open quantum systems, macroscopic quantum states, and quantum metrology. Specific topics of current interest include quantum feedback control, quantum reservoir engineering, topological quantum computation, and analysis of macroscopic quantum superpositions in interacting many-body systems. Such superposition states, dramatically illustrated by Schrödinger’s famous cat paradox, offer unprecedented opportunities for precision measurements. Professor Whaley’s recent research in quantum biology seeks to characterize and understand the role of quantum dynamical effects in biological systems, with a perspective that combines physical intuition and detailed quantum simulation with insights from various branches of quantum science: quantum physics, molecular quantum mechanics, and quantum information.

Interplay between machine learning and quantum computing

We present two examples of the growing interplay between machine learning and quantum computing: one illustrates the use of classical machine learning for quantum computing, while the other investigates the role of quantum coherence in quantum machine learning for classical classification problems.

The first example addresses the application of machine learning to continuous quantum error correction. We propose a machine learning algorithm for continuous quantum error correction based on the use of a recurrent neural network (RNN) to identify bit-flip errors from continuous noisy syndrome measurements. The algorithm is designed to operate on measurement signals deviating from ideal behavior. We analyze continuous measurements taken from a superconducting architecture using three transmon qubits to identify three significant practical examples of these deviations. Synthetic measurement signals for training the recurrent neural network are generated from these real-world imperfections, and the proficiency of the RNN is then evaluated when implementing active error correction. The results show that our machine learning protocol outperforms the double-threshold protocol across all tests, achieving a final state fidelity comparable to the near-optimal discrete Bayesian classifier.
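
To make the pipeline concrete, here is a minimal sketch of the learning component: a small GRU trained on synthetic continuous syndrome traces to flag bit-flip errors. The noise model, architecture, and hyperparameters are illustrative stand-ins, not those of the experiment or of the paper (requires PyTorch):

```python
import torch
import torch.nn as nn

# Synthetic syndrome traces: the signal fluctuates around +1 (no error) or
# -1 (bit-flip) with Gaussian measurement noise; parameters are illustrative.
def make_batch(batch=64, steps=100, noise=2.0, flip_rate=0.02):
    xs = torch.zeros(batch, steps, 1)
    labels = torch.zeros(batch, steps)
    state = torch.ones(batch)
    for t in range(steps):
        flips = torch.rand(batch) < flip_rate
        state = torch.where(flips, -state, state)
        labels[:, t] = (state < 0).float()
        xs[:, t, 0] = state + noise * torch.randn(batch)
    return xs, labels

class SyndromeRNN(nn.Module):
    def __init__(self, hidden=32):
        super().__init__()
        self.rnn = nn.GRU(1, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)
    def forward(self, x):
        h, _ = self.rnn(x)
        return self.head(h).squeeze(-1)   # per-time-step error logit

model = SyndromeRNN()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.BCEWithLogitsLoss()
for step in range(200):                   # short illustrative training run
    xs, labels = make_batch()
    loss = loss_fn(model(xs), labels)
    opt.zero_grad(); loss.backward(); opt.step()
print(f"final training loss: {loss.item():.3f}")
```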

The second example investigates the role of coherence in tensor-network QML for classical classification problems. While decoherence of qubits is expected to decrease the performance of QML models, the diminished performance can possibly be compensated for by adding ancillas to the models and accordingly increasing their virtual bond dimension. We investigate the competition between decoherence and the addition of ancillas on the classification performance of two models, performing a regression analysis of the decoherence effects. We present numerical evidence that the fully decohered unitary tree tensor network (TTN) with two ancillas performs at least as well as the non-decohered unitary TTN, suggesting that it is beneficial to add at least two ancillas to the unitary TTN regardless of the amount of decoherence that may thereby be introduced.
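
For readers unfamiliar with the architecture, the sketch below assembles a tiny unitary TTN classifier (random, untrained unitaries; four input features; no ancillas) and applies a dephasing channel to each kept qubit as a stand-in for decoherence. All choices are illustrative, not the models analyzed in the talk:

```python
import numpy as np

rng = np.random.default_rng(1)

def encode(x):
    # Angle-encode a feature in [0, 1] as a single-qubit density matrix.
    psi = np.array([np.cos(np.pi * x / 2), np.sin(np.pi * x / 2)])
    return np.outer(psi, psi.conj())

def random_unitary(dim):
    # Haar-like random unitary via QR decomposition.
    m = rng.normal(size=(dim, dim)) + 1j * rng.normal(size=(dim, dim))
    q, r = np.linalg.qr(m)
    return q * (np.diagonal(r) / np.abs(np.diagonal(r)))

def dephase(rho, p):
    # Dephasing channel standing in for decoherence of the kept qubit.
    return (1 - p) * rho + p * np.diag(np.diag(rho))

def node(rho_a, rho_b, p):
    # TTN node: 2-qubit unitary, then trace out the second qubit.
    u = random_unitary(4)
    rho = (u @ np.kron(rho_a, rho_b) @ u.conj().T).reshape(2, 2, 2, 2)
    kept = np.einsum('ijkj->ik', rho)     # partial trace over second qubit
    return dephase(kept, p)

def ttn_output(features, p):
    layer = [encode(x) for x in features]
    while len(layer) > 1:                 # binary-tree coarse-graining
        layer = [node(layer[i], layer[i + 1], p) for i in range(0, len(layer), 2)]
    return layer[0][1, 1].real            # P(|1>) of the root qubit as class score

x = [0.1, 0.7, 0.3, 0.9]                  # toy 4-feature input
print("coherent score: ", ttn_output(x, p=0.0))
print("decohered score:", ttn_output(x, p=1.0))
```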

Yuhai Tu

IBM T. J. Watson Research Center

Tuesday 12, 9:00-9:50

Résumé:

Yuhai Tu graduated from the University of Science and Technology of China in 1987. He came to the US under the CUSPEA program and received his PhD in physics from UCSD in 1991. He was a Division Prize Fellow at Caltech from 1991 to 1994. He joined the IBM Watson Research Center as a Research Staff Member in 1994 and served as head of the theory group from 2003 to 2015. He has been an APS Fellow since 2004 and served as Chair of the APS Division of Biological Physics (DBIO) in 2017. He is also a Fellow of the AAAS.

Yuhai Tu has broad research interests, which include nonequilibrium statistical physics, biological physics, theoretical neuroscience, and, most recently, the theoretical foundations of deep learning. He has made seminal contributions in diverse areas including flocking theory, the growth dynamics of the Si/a-SiO2 interface, pattern discovery in RNA microarray analysis, quantitative models of bacterial chemotaxis, the circadian clock, and the energy-speed-accuracy relation in biological systems.

Stochastic learning dynamics and generalization in neural networks: A statistical physics approach for understanding deep learning

Despite the great success of deep learning, it remains largely a black box. For example, the main search algorithm in deep neural networks is the Stochastic Gradient Descent (SGD) algorithm; however, little is known about how SGD finds “good” solutions (with low generalization error) in the high-dimensional weight space. In this talk, we will first give a general overview of SGD, followed by a more detailed description of our recent work [1-3] on the SGD learning dynamics, the loss function landscape, and their relationship.

Time permitting, we will discuss more recent work on understanding why flat solutions are more generalizable, and whether there are other measures for better generalization, based on an exact duality relation we found between neuron activities and network weights [4].
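
A cartoon of the flatness selection discussed in [1] and [3]: when SGD noise scales with the gradient itself, as minibatch noise roughly does, the same noise level that destabilizes a sharp minimum leaves a flat one intact. The 1D loss and all parameters below are illustrative, not the papers' analysis:

```python
import numpy as np

# 1D toy loss with a sharp minimum at x = -1 and a flat one at x = +1.
# "Minibatch" gradients are modeled as exact gradients with multiplicative
# noise, so fluctuations are largest where the curvature is largest.
rng = np.random.default_rng(0)

def grad(x):
    return 100.0 * (x + 1.0) if x < 0 else 2.0 * (x - 1.0)

def run_sgd(x0, lr=0.01, noise=3.0, steps=1000):
    x = x0
    for _ in range(steps):
        x -= lr * grad(x) * (1.0 + noise * rng.normal())
        x = float(np.clip(x, -4.0, 4.0))
    return x

sharp = [run_sgd(-0.95) for _ in range(200)]   # start in the sharp minimum
flat = [run_sgd(+0.95) for _ in range(200)]    # start in the flat minimum
print("escaped the sharp minimum:", np.mean([x > 0 for x in sharp]))
print("escaped the flat minimum: ", np.mean([x < 0 for x in flat]))
```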

[1] “The inverse variance-flatness relation in Stochastic-Gradient-Descent is critical for finding flat minima”, Y. Feng and Y. Tu, PNAS, 118 (9), 2021.

[2] “Phases of learning dynamics in artificial neural networks: in the absence and presence of mislabeled data”, Y. Feng and Y. Tu, Machine Learning: Science and Technology (MLST), July 19, 2021. https://iopscience.iop.org/article/10.1088/2632-2153/abf5b9/pdf

[3] “Stochastic Gradient Descent Introduces an Effective Landscape-Dependent Regularization Favoring Flat Solutions”, Ning Yang, Chao Tang, and Y. Tu, Phys. Rev. Lett. (PRL) 130, 237101, 2023.

[4] “The activity-weight duality in feed forward neural networks: The geometric determinants of generalization”, Y. Feng and Y. Tu, https://arxiv.org/abs/2203.10736