I am a quantum scientist.
I work at the interface between physics, computer science, mathematics, and philosophy of science.
I am currently a Quantum Postdoctoral Fellow at the Simons Institute for the Theory of Computing at the University of California, Berkeley. I received my Ph.D. from Freie Universität Berlin in Jens Eisert's group in 2020 and was a Hartree Postdoctoral Fellow at the Joint Center for Quantum Information and Computer Science of the University of Maryland and NIST from 2021 until 2024.
I studied physics, philosophy, and mathematics at Konstanz, Oxford, and Munich.
I am always happy to talk about my work and to answer any questions you might have at mail@dhangleiter.eu.
What are the computational limitations on predicting and understanding the physical world? How are these limitations manifested in practice? Vice versa, can we harness the complexity of quantum systems as computing devices in order to overcome those limitations and learn about unknown physics?
These are the big guiding questions of my research. While solving specific problems, I try to keep in mind the overarching motivation for what I am doing. The questions I ask center around the role of computing in physics and the role of physics in computing.
It is an insight of Paul Benioff, Richard Feynman, David Deutsch, and others in the late 20th century that we can harness precise control of quantum systems computationally: controlled quantum devices allow for efficient simulations and computations in cases in which any classical algorithm would require exponential time. But what does that mean in practice? Today, we are at an exciting point. We—or, rather, extremely clever physicists and engineers in well-organized labs—can now actually control individual quantum systems to the extent required for quantum computers. Not quite on the required scale yet, but it's a start and we are getting there. Now you may be wondering: can we then already perform computations that cannot be done on any of our classical supercomputers?
This is precisely one of the questions I have been working on over the past years. In particular, I have worked extensively on developing so-called quantum random sampling schemes. Implementing such schemes on the available hardware—so the argument goes—demonstrates that the speedup of quantum computers is real: they are genuinely hard to simulate on our classical supercomputers, and we have deep reasons from complexity theory to believe that this simulation will get exponentially harder as we build larger and larger quantum computers.
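To give a concrete flavor, here is a minimal, purely illustrative Python sketch of the brute-force statevector approach to quantum random sampling (all function names and parameters are my own choices for this page, not taken from any of the papers below). It builds a small random circuit from Haar-random single-qubit gates and CZ gates and then samples bitstrings from the Born-rule output distribution. The 2**n-sized statevector is exactly what makes this classical approach infeasible at scale.

```python
# Illustrative brute-force simulation of a random quantum circuit.
import numpy as np

rng = np.random.default_rng(0)
n, depth = 4, 5                       # qubits, circuit layers
dim = 2 ** n
state = np.zeros(dim, dtype=complex)
state[0] = 1.0                        # start in |0...0>

def apply_1q(state, gate, q):
    """Apply a 2x2 gate to qubit q of an n-qubit statevector."""
    psi = state.reshape([2] * n)
    psi = np.moveaxis(psi, q, 0)
    psi = np.tensordot(gate, psi, axes=(1, 0))
    return np.moveaxis(psi, 0, q).reshape(dim)

def apply_cz(state, q1, q2):
    """Apply a controlled-Z gate between qubits q1 and q2."""
    psi = state.copy()
    for idx in range(dim):
        if (idx >> (n - 1 - q1)) & 1 and (idx >> (n - 1 - q2)) & 1:
            psi[idx] *= -1
    return psi

def haar_1q(rng):
    """A Haar-random single-qubit unitary via QR decomposition."""
    z = (rng.normal(size=(2, 2)) + 1j * rng.normal(size=(2, 2))) / np.sqrt(2)
    q, r = np.linalg.qr(z)
    return q * (np.diag(r) / np.abs(np.diag(r)))

for _ in range(depth):
    for q in range(n):
        state = apply_1q(state, haar_1q(rng), q)
    for q in range(0, n - 1, 2):
        state = apply_cz(state, q, q + 1)

probs = np.abs(state) ** 2            # Born-rule output distribution
samples = rng.choice(dim, size=10, p=probs)
print([format(int(s), f"0{n}b") for s in samples])
```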
If you want to learn more about this, I recommend the following papers of mine.
What is the computational power of quantum systems?
To make quantum computations possible in our real, dirty, and noisy world, we need to protect them from that very world, which destroys unprotected quantum information.
To do this, we can use quantum error-correcting codes, which encode quantum information redundantly and in a distributed way in order to protect it from local noise processes.
But the problem is: once we have encoded quantum information into logical qubits, it is much harder to manipulate—by design!
In fact, by the Eastin-Knill theorem, acting only locally on the logical qubits does not allow us to implement universal quantum computation.
But such transversal operations still allow us to do classically hard computations—see my work with the Harvard team.
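As a toy illustration of what a transversal logical gate is (my own minimal example, not the codes from that work): in the three-qubit repetition code with |0_L> = |000> and |1_L> = |111>, applying X to every physical qubit implements a logical X. Since the operation acts on each physical qubit separately, it cannot spread errors between them.

```python
# Toy example (illustrative only): transversal X on the 3-qubit
# repetition code acts as the logical X.
import numpy as np

X = np.array([[0, 1], [1, 0]])

def kron(*ops):
    """Kronecker product of a list of operators."""
    out = np.array([[1.0]])
    for op in ops:
        out = np.kron(out, op)
    return out

zero_L = np.zeros(8); zero_L[0] = 1.0   # |0_L> = |000>
one_L = np.zeros(8); one_L[7] = 1.0     # |1_L> = |111>

X_transversal = kron(X, X, X)           # X applied to every physical qubit
assert np.allclose(X_transversal @ zero_L, one_L)  # |0_L> -> |1_L>
assert np.allclose(X_transversal @ one_L, zero_L)  # |1_L> -> |0_L>
print("Transversal X acts as logical X on the repetition code.")
```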
At the same time, different codes are better suited to different experimental platforms.
So perhaps we can use different quantum codes, each with its own natural operations, to realize different computational primitives on different experimental platforms.
We could then build boxes that each do one thing really well and, to obtain a universal quantum computer, flexibly stitch those boxes together.
To assess this possibility, I do what we have termed "fault-tolerant compiling": developing new codes with interesting natural gate sets, compiling quantum algorithms efficiently into such gate sets, and finding efficient implementations on actual hardware.
If you want to learn more about my work on fault-tolerant compiling of quantum algorithms, I recommend the following resources.
Which quantum computations can be made naturally fault-tolerant?
Is it possible to verify the outcomes of a quantum computation efficiently using only classical computation? After all, we expect that no classical computer can solve all the problems that quantum computers can solve. So there is no way for us to just compare the results of two computations. This is one of the deep questions in the theory of quantum computing. We now know that this is indeed possible using a complicated protocol designed for verifying a quantum computation done by some remote server. But is verification possible in simple scenarios, for example, when we only receive samples from the output of a quantum computation? We might also be able to exploit some uniquely quantum properties of a quantum computation, for example, that we can look at a quantum state from different angles or—technically speaking—measure it in different bases. In which scenarios is verification possible and in which scenarios is it not? What are the minimal resources required in a given scenario?
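To illustrate the point about bases with a deliberately simple toy example (mine, not a verification protocol from the literature): the state |+> looks like a fair coin when measured in the computational (Z) basis but is perfectly deterministic in the Hadamard (X) basis. Verification schemes can exploit exactly such basis-dependent fingerprints of a quantum state.

```python
# Toy example (illustrative only): the same state measured in two bases.
import numpy as np

rng = np.random.default_rng(1)
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)   # Hadamard gate
plus = H @ np.array([1.0, 0.0])                # |+> = (|0> + |1>)/sqrt(2)

def sample(state, shots):
    """Sample measurement outcomes from the Born-rule distribution."""
    return rng.choice(len(state), size=shots, p=np.abs(state) ** 2)

z_outcomes = sample(plus, 1000)        # measure directly: Z basis
x_outcomes = sample(H @ plus, 1000)    # rotate by H, then measure: X basis
print("Z basis, fraction of 1s:", z_outcomes.mean())  # ~0.5, looks random
print("X basis, fraction of 1s:", x_outcomes.mean())  # 0.0, deterministic
```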
This question is what brought me into the field of quantum computing in the first place, and it still fascinates me today. Over the past years, I have studied it specifically in the context of analog quantum simulation and quantum random sampling. If you want to read more about it, I recommend the following papers and talk.
Closely related to the verification of quantum simulations and computations is the question of how we can characterize quantum devices in the most resource-efficient way. While in verification we are interested only in a yes/no answer as to whether the computation succeeded, in characterization we actually want to know what happened inside the device. Most importantly: what, if anything, went wrong? Such answers are crucial for engineering quantum devices and making them ever more precise. What exactly we want to learn about a device depends a lot on the specific setting.
While we have many tools available for characterizing digital quantum computations, much less is available for analog simulators. The setting I am currently most interested in is systems with particle-number-conserving dynamics. I think of such systems as analogous to digital quantum computations: instead of composing individual local gates, we compose the dynamics of individual particles. For a detailed study of how to characterize the dynamics of such particles in practice, I recommend the following paper and talk.
Another notion that is closely related to verification and characterization is learning. Learning tasks can have many different goals, but what unites them is that we want to find a model that fits some data given to us. We don't require the model to be true in the sense that it has a theoretical grounding and uniquely describes the data within our theory. Learning is intrinsically data-driven: we simply want to reproduce certain characteristic features of the data. For example, in a classification task we might want to divide our data, say images, into different categories and then, given a new image, decide which category it fits into. Or we might want to generate new images that are similar to the ones given to us. An interesting question is whether quantum algorithms can be better at learning tasks than classical ones. We showed that the answer to this question is a resounding "Yes"—albeit for a very artificial task (it's theory, that's allowed!).
How can we characterize quantum devices and verify quantum computations?
Verification
Characterization
Learning
One of the threads weaving through my research is the relationship and interplay between computing and physics.
Computers are physical objects and are therefore bound by the laws of physics.
But physics, and the predictions we make using physical laws, also relies on computations performed by those very devices.
Therefore what we can know about the physical world seems to depend crucially on what we can compute.
That this connection is a deep one is manifest in the Church-Turing thesis, which posits that what is computable in the physical world is precisely what is computable by a Turing machine.
Extending this to the claim that what is efficiently computable in the physical world is precisely what is efficiently computable by a Turing machine yields the Extended Church-Turing thesis, formulated, for example, by Bernstein & Vazirani.
Quantum computing violates the extended Church-Turing thesis.
But how does it do so? What are the physical mechanisms of quantum as opposed to classical physics which enable the speedup?
Can we quantitatively connect the presence of 'quantum resources' in the state or dynamics of a quantum system to the runtime of classical algorithms simulating that system?
And, viewed in the light of quantum physics and computing, how deep does the connection between physics and computing run after all?
These are among the most intriguing and subtle questions I work on.
While we know of various physical properties that are necessary for quantum speedups—such as entanglement and contextuality—I am most interested in the phenomenon of interference.
Already Feynman, in the talks that started the field of quantum simulation and computing, pointed to interference as a key mechanism preventing classical simulation.
In classical simulations, in particular so-called Monte Carlo methods, interference often causes a sign problem: the quantities being averaged take positive and negative values that nearly cancel, so that exponentially many samples may be needed to resolve the result.
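Here is a deliberately simple numerical toy model of the sign problem (my own construction for this page, not taken from the papers below). We estimate a signed sum by sampling from the sign-free weight distribution and averaging the signs, as quantum Monte Carlo effectively does; when the average sign is small, the relative statistical error blows up.

```python
# Toy model (illustrative only) of the Monte Carlo sign problem.
import numpy as np

rng = np.random.default_rng(2)
N = 10**6                                   # number of Monte Carlo samples
n_config = 1000                             # number of configurations x

# Target: Z = sum_x s(x) w(x) with positive weights w and signs s = +/-1.
w = rng.random(n_config)                    # positive weights
s = rng.choice([-1.0, 1.0], size=n_config)  # interference-induced signs

# Sample x from the sign-free distribution p(x) = w(x) / sum_x w(x)
# and average the signs of the sampled configurations.
p = w / w.sum()
x = rng.choice(n_config, size=N, p=p)
avg_sign = s[x].mean()                      # estimates Z / sum_x w(x)

exact = (s * w).sum() / w.sum()
stderr = s[x].std() / np.sqrt(N)
print(f"exact average sign:   {exact:+.4f}")
print(f"Monte Carlo estimate: {avg_sign:+.4f} +/- {stderr:.4f}")
# The smaller the exact average sign, the worse the relative error:
# resolving it to fixed accuracy requires ~ 1/|exact|^2 samples.
print(f"relative error:       {stderr / abs(exact):.3f}")
```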
To what extent the sign problem can be thought of as a physical property of a system and how it relates to other physical properties is one of the guiding questions of this research.
I like to approach this question—of course—from a computational viewpoint.
This take is well illustrated in the following papers and talk.
What are the physical mechanisms that obstruct classical simulations?
In science we work towards various epistemic goals: predicting future phenomena, explaining why things are the way they are, understanding the mechanisms governing the world. In order to do so, we use many different methodologies, ranging from developing theories, modelling phenomena in great detail, and devising extremely simple and highly idealized 'toy' models, to performing computer simulations and experiments. All of these methodologies serve different purposes in the scientific process. For example, when developing detailed models, we might primarily aim to make accurate predictions about future experiments, about the weather tomorrow, or about the climate in a century. Using toy models, we might want to isolate and understand specific physical mechanisms or phenomena.
Quantum simulation and computing are newcomers to this 'methodological map'. While it seems clear that they are, and will increasingly become, important tools for science, it is unclear what exactly their function can be. This is an intriguing question because the questions answered by some quantum simulations seem more like those answered by an experiment, while the questions answered by others seem more like those answered by a computer simulation.
So what is the epistemic goal of performing quantum computing and simulation in science? And how do these novel methodologies compare to the well-used and well-tested ones?
Can we justify new and different types of inferences through quantum computing and simulation?
These are some of the questions I look at from a philosophy of science perspective, in particular with regard to the ability of quantum computing and simulation to improve our understanding of the world.
(How) does quantum computing impact the way we do science?
24/11 I wrote an op-ed in Spektrum der Wissenschaft, the German edition of Scientific American, on the relevance of research on quantum advantage.
23/12 Our experimental demonstration of fault-tolerant circuits on many logical qubits, published in Nature, has received a lot of attention, e.g., in Quanta Magazine, on Spektrum.de, on Scott Aaronson's blog Shtetl-Optimized, and on phys.org.
23/10 Quanta Magazine covered our result on complexity phase transitions generated by entanglement as part of a story on quantifying quantumness. JQI interviewed me for a story on this result.
22/12 The science journalist Dan Garisto interviewed me about our philosophy book on analogue simulation for an article titled "What we talk about when we talk about qubits".
22/08 Science magazine interviewed me on a recent result in quantum random sampling. My take has been picked up by Spektrum der Wissenschaft, Wired.it, European Scientist, and Deutsche Welle.
21/11 QuICS published a long profile about my work.
20/08 The Berlin-based museum Futurium interviewed me about the role quantum computers could play in fighting COVID-19.
A complete list of my publications can be found on the arXiv and under the ORCID ID 0000-0002-4766-7967.