
Statistical approach to quantum phase estimation


Published 25 November 2021 © 2021 The Author(s). Published by IOP Publishing Ltd on behalf of the Institute of Physics and Deutsche Physikalische Gesellschaft
Citation: Alexandria J Moore et al 2021 New J. Phys. 23 113027. DOI: 10.1088/1367-2630/ac320d


Abstract

We introduce a new statistical and variational approach to the phase estimation algorithm (PEA). Unlike the traditional and iterative PEAs which return only an eigenphase estimate, the proposed method can determine any unknown eigenstate–eigenphase pair from a given unitary matrix utilizing a simplified version of the hardware intended for the iterative PEA (IPEA). This is achieved by treating the probabilistic output of an IPEA-like circuit as an eigenstate–eigenphase proximity metric, using this metric to estimate the proximity of the input state and input phase to the nearest eigenstate–eigenphase pair and approaching this pair via a variational process on the input state and phase. This method may search over the entire computational space, or can efficiently search for eigenphases (eigenstates) within some specified range (directions), allowing those with some prior knowledge of their system to search for particular solutions. We show the simulation results of the method with the Qiskit package on the IBM Q platform and on a local computer.


1. Introduction

Efficient spectral decomposition of large matrices is a key component to many optimization and machine learning algorithms, with applications ranging from factoring and searching algorithms to computational chemistry [1]. On classical computers, spectral decomposition scales super-linearly with the system dimension [2], making it intractable for large problems. Due to the utility of spectral decomposition and its classical limitations, quantum approaches to spectral decomposition and eigenvalue estimation have been pursued [3]. One significant approach is the quantum phase estimation algorithm (PEA) [4]—a means of determining unknown eigenphases of a unitary matrix—which is a key subroutine in a number of quantum algorithms including Shor's factoring algorithm [5], quantum principal component analysis [6], the generalized Grover's search algorithm [7], and quantum simulations [8–10].

Near-term quantum systems operate in the noisy intermediate-scale quantum (NISQ) regime [11], facing restrictions on both circuit depth and breadth due to decoherence and gate infidelity. Consequently, interest in the traditional PEA [4] and quantum principal component analysis [6] has been channeled toward developments in the iterative PEA (IPEA) [12]—a method which estimates an unknown phase over multiple circuit iterations—allowing for significant reduction in both qubit usage (circuit breadth) and controlled-gate operations (circuit depth). The IPEA has been demonstrated on photonic systems [13]. On the other hand, variational quantum algorithms (VQA)—which use a classical computer to control and optimize the parameters applied in a quantum circuit—have been developed for a variety of problems as they leverage the speedup of quantum algorithms with lower-depth circuits [14, 15].

Here, we introduce a quantum–classical hybrid algorithm combining the PEA with the VQA—which we call the statistical PEA (SPEA)—and show preliminary simulation results on the IBM Q platform with the Qiskit package [16] as well as simulations on a local computer. The method is able to determine any unknown eigenstate–eigenphase pair from a unitary matrix by utilizing hardware intended for the IPEA. Further, the SPEA can be applied repeatedly to obtain a full spectral decomposition. The SPEA may be compared to other variational quantum eigensolvers [17–19], the primary difference being that other variational eigensolvers work directly on a (Hermitian) matrix encoded as a quantum state using specially designed quantum circuits. The SPEA assumes access to a gate representation of the unitary exponentiation of the state—or assumes simultaneous availability of several copies of the quantum state to approximate the quantum gate à la [6]. In return, the SPEA requires a polynomially-reduced number of (classical) optimization parameters—as it optimizes for a single eigenstate, rather than diagonalizing the entire matrix simultaneously—and directly delivers eigenstate–eigenphase pairs (whereas other approaches may allow on-demand generation of eigenstates, but require tomography if knowledge of the state is needed). The SPEA is also able to search for eigenphases within specified ranges, allowing those with some prior knowledge of their system to search for particular solutions, whether the ground state (near-minimum eigenphase), the principal eigenstate (near-maximal eigenphase), or any other region of interest.

This paper is organized as follows: section 2 reviews the traditional PEA and the IPEA and introduces a statistical metric $\mathcal{C}$ for quantifying the proximity of any given input state to its closest eigenstate. Section 3 describes the SPEA and discusses the connections between the $\mathcal{C}$ factor and the quality (in terms of proximity) of the derived eigenstate–eigenphase pairs (with the derivation details in appendix A). Section 3 also outlines the optimization process for obtaining the eigenstate–eigenphase pairs. Simulation results on different platforms are reported and discussed in section 4, with scaling to larger systems considered in section 4.3; methodology details are provided in appendices B and C. We discuss the performance of the SPEA and propose future directions and applications of the method in section 5, and conclude in section 6.

2. Phase estimation algorithms

Traditional PEA implementations, diagrammed in figure 1, take any given unitary $\hat{U}$ and any given eigenstate |ν⟩ of $\hat{U}$ and return the corresponding eigenphase θ where

Equation (1): $\hat{U}\vert \nu \rangle ={\mathrm{e}}^{\mathrm{i}2\pi \theta }\vert \nu \rangle $

The (approximate) eigenphase $\tilde{\theta }\in \left[0,1\right)$ (equivalently, $\in \left[-.5,.5\right)$) may be directly measured on the control qubits (or qudits, when the control is dc -dimensional) of the PEA. The target register is typically unmeasured during the process. For an arbitrary target register input |Φ⟩, the probability that the circuit returns the eigenphase θk associated with a particular eigenstate |νk ⟩ is ${\left\vert \langle {\nu }_{k}\vert {\Phi}\rangle \right\vert }^{2}$. If |Φ⟩ is not itself an eigenstate, the eigenphase retrieved varies each time the PEA circuit is run. The prototypical PEA thus approximates a particular θ in a single trial.


Figure 1. The traditional PEA using n dc -dimensional qudits in the control register. The quantum gates are colored in brown and the measurement gates in blue. The target register is highlighted in purple with a dashed outline and the control register in gray with a dotted outline. Note that in the dc > 2 case, the H 'Hadamard' gate acts as a discrete quantum Fourier transform (QFT) gate. Additionally, in the dc > 2 case the control gates act as multi-value controlled gates (MVCGs), applying the gate to the qth power when the control qudit is in state |q⟩. The circuit will estimate the phase ${\tilde{\theta }}_{k}$ of eigenstate |νk ⟩ to precision ${d}_{c}^{-n}$. The estimate of eigenphase θk is returned with probability ${\left\vert \langle {\Phi}\vert {\nu }_{k}\rangle \right\vert }^{2}$.


The traditional PEA requires large quantum circuits which are often unreliable in the NISQ regime. To overcome hardware constraints, the IPEA was developed. The IPEA significantly reduces circuit depth requirements by approximating a particular θ one qubit (or dc -level qudit) at a time, starting from the least significant qubit (qudit). The IPEA requires a rotation gate—a linear phase across the control register—to 'subtract' off eigenphase information determined in previous iterations (i.e. if the quantum circuit's state before Rz (θR ) is ∑q αq |q⟩|Φq ⟩, then after the rotation gate the quantum circuit's state is ${\sum }_{q}{\enspace \alpha }_{q}{\mathrm{e}}^{-\mathrm{i}q2\pi {\theta }_{R}}\vert q\rangle \vert {{\Phi}}_{q}\rangle $). The IPEA, as the name suggests, requires a number of iterations equal to the number of bits (dits) of precision desired from the eigenphase. Additionally, the input to the target register of an IPEA must either be an eigenstate (and identically prepared each iteration) or the previous iteration's output must propagate forward and serve as the next iteration's input.

An IPEA circuit is diagrammed in figure 2. In the general case, the control qudit may be high-dimensional (dc -level). In this case, the Hadamard gates represent a dc -dimensional quantum Fourier transform gate and the control-$\hat{U}$ gate is a multi-level control gate [20]: when the control state is |q⟩, a ${\hat{U}}^{q}$ gate is applied to the target register. Consider the IPEA in its 'last' iteration's settings (x = 0 in figure 2). When the target register is an eigenstate |Φ⟩ = |ν⟩ and the rotation gate is used to subtract off phase 2πθR = 2πθ, the control dits deterministically collapse to state |0⟩. When either the target input is not an eigenstate and/or θR is not the corresponding eigenphase, the control dits will collapse to |0⟩ with non-unity probability.
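To make this last-iteration setting concrete, the following minimal Qiskit sketch (our illustration, not code from this work; the unitary, the eigenstate and the phase are arbitrary choices, and a phase gate stands in for the Rz rotation) builds the x = 0 circuit with a single control qubit and shows the deterministic collapse to |0⟩ when the target register holds an eigenstate and θR matches its eigenphase.

```python
# Minimal sketch of the x = 0 IPEA iteration for a qubit control (d_c = 2).
# Assumes Qiskit and qiskit-aer are installed; U and theta are illustrative.
import numpy as np
from qiskit import QuantumCircuit, transpile
from qiskit.circuit.library import UnitaryGate
from qiskit_aer import AerSimulator

theta = 0.3                                    # eigenphase in [0, 1)
U = np.diag([np.exp(2j * np.pi * theta), 1.0]) # |0> is an eigenstate with phase theta

qc = QuantumCircuit(2, 1)                      # qubit 0: control, qubit 1: target (in |0>)
qc.h(0)
qc.append(UnitaryGate(U).control(1), [0, 1])   # controlled-U
qc.p(-2 * np.pi * theta, 0)                    # rotation gate subtracting theta_R = theta
qc.h(0)                                        # H is its own inverse (the d_c = 2 'QFT')
qc.measure(0, 0)

backend = AerSimulator()
counts = backend.run(transpile(qc, backend), shots=1000).result().get_counts()
print(counts)                                  # expect {'0': 1000}: deterministic collapse
```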


Figure 2. The IPEA. See figure 1 for color-conventions and notes on the MVCG. To retrieve n dits of the eigenphase (i.e. phase precision $\pm 1/{d}_{c}^{n}$), run the circuit with x = n − 1 and θR = 0; the measured control state is the nth base-dc dit of the unknown phase. Proceed to run the circuit for x = n − 2 and set θR according to the previous control results; the measured control state is the (n − 1)th dit. Continue the process iteratively until x = 0 and the entire phase is recovered. Note that the iterative method is diagrammed for a single qudit control, but may be realized with any number of control qudits, similar to the traditional PEA.


Indeed, for eigenstate input |ν⟩ with a dc -level control, the final state of the control qudit before measurement is

Equation (2): $\vert {\psi }_{\text{c}}\rangle =\frac{1}{{d}_{c}}\sum _{j=0}^{{d}_{c}-1}\left(\sum _{q=0}^{{d}_{c}-1}{\mathrm{e}}^{\mathrm{i}2\pi q(\theta -{\theta }_{R})}\,{\mathrm{e}}^{-\mathrm{i}2\pi qj/{d}_{c}}\right)\vert j\rangle $

where θ is the eigenphase of |ν⟩ and −θR (${\theta }_{R}\in \left[0,1\right)$) is the rotation applied by the rotation gate. The probability of measuring the system in output bin |0⟩ is

Equation (3): ${\mathcal{P}}_{\theta }(-{\theta }_{R})={\left\vert \frac{1}{{d}_{c}}\sum _{q=0}^{{d}_{c}-1}{\mathrm{e}}^{\mathrm{i}2\pi q(\theta -{\theta }_{R})}\right\vert }^{2}$

${\mathcal{P}}_{\theta }(-{\theta }_{R})$ goes to one as θR approaches θ, as shown in figure 3. In the most general case, where the target register is an arbitrary (non-eigenstate) input state |Φ⟩ and the rotation gate subtracts off phase 2πθR , the probability that the control qudits will collapse to |0⟩ is

Equation (4): $\mathcal{C}(\vert {\Phi}\rangle ,{\theta }_{R})=\sum _{k=0}^{{d}_{t}-1}{\left\vert \langle {\nu }_{k}\vert {\Phi}\rangle \right\vert }^{2}\,{\mathcal{P}}_{0}({\theta }_{k}-{\theta }_{R})$


Figure 3. Plot of ${\mathcal{P}}_{0}({\Delta}\theta )$ (equation (3)) over ${\Delta}\theta \in \left[-.5,.5\right)$ for various dc . Note that ${\mathcal{P}}_{0}$ has infinite domain with period 1. This is the probability of the (dc -level) control register of an IPEA collapsing to |0⟩ as a function of the difference between the eigenphase θ and the applied rotation θR , Δθ ≡ θ − θR , for an eigenstate input. Note that when the applied rotation matches the eigenphase (Δθ = 0), the control collapses to |0⟩ deterministically. Denote the region around Δθ = 0 (from dot to dot) as the central lobe of ${\mathcal{P}}_{0}({\Delta}\theta )$, and the small lobes with local maxima outside of it as the sidelobes. See that the higher the system's dimensionality, the narrower the probability curve's central lobe and the lower the local maxima of the sidelobes. Note that dc = 2 has no sidelobes (the probability is monotonic on either side of the central lobe). Also note ${\mathcal{P}}_{0}({\Delta}\theta )=0$ for ${\Delta}\theta ={d}_{c}^{-1}$ and the width of the central lobe is therefore ${\Delta}{\theta }_{\text{FWFM}}=2{d}_{c}^{-1}$.


where $\hat{U}$ is dt -by-dt -dimensional, the sum runs over the eigenstates |νk ⟩ of $\hat{U}$ with eigenphases θk , and the target register is dt -dimensional. Appreciate that $\mathcal{C}(\vert {\Phi}\rangle ,{\theta }_{R})=1$ if and only if |Φ⟩ is an eigenstate and θR is its corresponding eigenphase.
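As a purely classical cross-check of this metric (a sketch added here for illustration; in the algorithm of section 3, $\mathcal{C}$ is instead estimated from the measured frequency of the |0⟩ outcome, and the unitary and input state below are arbitrary examples), equations (3) and (4) can be evaluated directly from an eigendecomposition:

```python
# Classical evaluation of P_0 (equation (3)) and the metric C (equation (4));
# for checking only -- the quantum circuit estimates C from measurement statistics.
import numpy as np

def p0(delta_theta, d_c):
    """Probability of the d_c-level control collapsing to |0> at phase mismatch delta_theta."""
    q = np.arange(d_c)
    return np.abs(np.sum(np.exp(2j * np.pi * q * delta_theta)) / d_c) ** 2

def metric_c(U, phi, theta_r, d_c):
    """C(|Phi>, theta_R): P_0 averaged over eigenphases, weighted by |<nu_k|Phi>|^2."""
    vals, vecs = np.linalg.eig(U)
    thetas = np.angle(vals) / (2 * np.pi) % 1          # eigenphases in [0, 1)
    weights = np.abs(vecs.conj().T @ phi) ** 2         # |<nu_k|Phi>|^2
    return float(sum(w * p0(t - theta_r, d_c) for w, t in zip(weights, thetas)))

# illustrative 4-dimensional unitary and input states
rng = np.random.default_rng(0)
A = rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4))
U, _ = np.linalg.qr(A)
vals, vecs = np.linalg.eig(U)

# C = 1 exactly for an eigenstate paired with its own eigenphase, and < 1 otherwise
print(metric_c(U, vecs[:, 0], np.angle(vals[0]) / (2 * np.pi) % 1, d_c=2))  # ~1.0
phi = np.ones(4) / 2.0                                                      # non-eigenstate
print(metric_c(U, phi, 0.25, d_c=2))                                        # generally < 1
```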

3. Statistical approach to PEA

The non-deterministic nature of the IPEA (in the non-eigenstate case) disqualifies the circuit from use as an eigenphase estimator in the standard approach. The SPEA instead considers the probabilistic outputs of the IPEA (and PEA) as valuable information which—when coupled with a classical controller as in figure 4—allows quantum PEA-like hardware to be used in a variational approach to determine any unknown eigenphase–eigenstate pair. The quantum hardware required is that of a traditional PEA with single-dit precision (n = 1) and the rotation gate standard to the IPEA (i.e. an IPEA with x set to 0). The classical controller determines |Φ⟩ and θR which are used in the PEA-type circuit. Multiple trials of the quantum circuit are run to approximate the probability $\tilde{\mathcal{C}}\approx \mathcal{C}(\vert {\Phi}\rangle ,{\theta }_{R})$ (of equation (4)). Note that the PEA-like circuit need only detect two measurement outcomes: |0⟩ and not (|0⟩), further reducing hardware requirements compared to typical high-dimensional PEAs. Treating the estimate $-1\cdot \tilde{\mathcal{C}}(\vert {\Phi}\rangle ,{\theta }_{R})$ as a cost function in an optimization process (making $\tilde{\mathcal{C}}$ the negative cost function), the classical controller adjusts |Φ⟩ and θR , until the quantum circuit near-deterministically returns |0⟩ as the output state. When $\tilde{\mathcal{C}}(\vert {{\Phi}\rangle }^{\ast },{\theta }_{R}^{\ast })\approx 1$, the classical controller has found the (approximate) eigenstate |Φ⟩* and the associated eigenphase ${\theta }_{R}^{\ast }$.


Figure 4. Diagram of variational classical–quantum system. Classical processes are indicated by double blue lines and quantum processes by single black lines. Quantum gates are shown in brown and measurement gates in blue. The (potentially high-dimensional) PEA-type circuit is simplified from the typical IPEA in that U need not be raised to high orders (${d}_{c}^{x}$) corresponding to desired eigenphase precision and the measurement gate need only distinguish between the |0⟩-state and the not (|0⟩)-state. The (negative) cost function (estimate) $\tilde{\mathcal{C}}$ is returned after a predetermined number of trials of the quantum circuit, approximating a probability. The classical controller optimizes |Φ⟩ and θR based on the (negative) cost function returned by the quantum circuit. Process continues until $\tilde{C}(\vert {\Phi}\rangle ,{\theta }_{R})=1$ or improvement in $\tilde{C}(\vert {\Phi}\rangle ,{\theta }_{R})$ ceases.


The quality of the eigenstate |Φ⟩* and eigenphase ${\theta }_{R}^{\ast }$ retrieval can be quantified by ${\mathcal{C}}^{\ast }=\mathcal{C}(\vert {\Phi}\rangle ^{\ast },{\theta }_{R}^{\ast })$. ${\mathcal{C}}^{\ast }$ can be used both to (1) determine the maximum possible distance from the estimated eigenphase ${\theta }_{R}^{\ast }$ to the nearest eigenphase θk and (2) bound the fidelity of |Φ⟩* to the actual eigenstate(s). Derivations of both are provided in appendix A.

The (negative) cost function $\mathcal{C}$ acts as a metric for quality of eigenvalue–eigenstate retrieval as shown in appendix A; by finding |Φ⟩ and θR which maximize this metric, we arrive at good estimates for an eigenstate (|Φ⟩) and eigenphase (θR ) pair. Following is the classical algorithm used to maximize $\mathcal{C}$, which is similar to a gradient search algorithm:

  • (a) The classical controller chooses a |Φ⟩ at random.
  • (b) The classical controller constructs an orthogonal basis {|Bm ⟩} including |Φ⟩.
  • (c) |Φ⟩ is fed to the quantum circuit with a set of rotation phases {θR }; the largest estimate $\tilde{\mathcal{C}}$ returned is recorded as ${\mathcal{C}}^{\ast }$ and the corresponding phase as ${\theta }_{R}^{\ast }$ (the standard approach; alternatively, the phase may first be estimated from a single measurement setting with θR = 0 and $\tilde{\mathcal{C}}$ then evaluated at that phase, as in the IBM Q simulations of section 4).
  • (d) For all m = 0 to 2dt − 1, with a initially set to 1, the following occurs:
    • 1. If m ⩾ dt then $z=\sqrt{-1}$. Otherwise z = 1.
    • 2. The classical controller generates the new state
      Equation (5): $\vert {{\Phi}}^{\prime }\rangle =\vert A\rangle /\sqrt{\langle A\vert A\rangle }$, where $\vert A\rangle =\vert {\Phi}\rangle +z\cdot a\cdot (1-{\mathcal{C}}^{\ast })\vert {B}_{m\,\mathrm{mod}\,{d}_{t}}\rangle $.
    • 3. |Φ'⟩ is fed to the quantum circuit (as in step (c)); the maximum value returned is ${\mathcal{C}}^{\prime \ast }$.
    • 4. If ${\mathcal{C}}^{\prime \ast } > {\mathcal{C}}^{\ast }$, then |Φ⟩ = |Φ'⟩ and ${\mathcal{C}}^{\ast }={\mathcal{C}}^{\prime \ast }$. Otherwise |Φ⟩ and ${\mathcal{C}}^{\ast }$ are unchanged.
  • (e) If |Φ⟩ was not updated during step (d), set a = a/2 and repeat step (d).
  • (f) If ${\mathcal{C}}^{\ast }$ is greater than the stopping condition or the maximum run-time has been exceeded, the classical controller concludes and returns |Φ⟩, ${\mathcal{C}}^{\ast }$, and ${\theta }_{R}^{\ast }$. Otherwise the process continues from step (b).

A few observations on the optimization process may be made. For each iteration, at least 2dt distinct input states are used. For each of these input states a set of {θR } is applied (when using the 'standard approach' in step (c)). Initially, the {θR } range from 0 to 1 with coarse resolution; as the optimization proceeds, {θR } will become fine and include phases from a limited region. Notably, we can choose to run the optimization process limiting {θR } to a narrow range of phases from the outset. In this way, we may choose to look only for ground-state (small θ), principal (large θ), or any other particular solutions to equation (1). In addition, we may eliminate known eigenstates or directions not of interest by excluding them from {|Bm ⟩} (step (b)) each iteration. In this fashion, the SPEA may be used to determine a complete (or partial) spectral decomposition of $\hat{U}$. Finally, we note that while the hardware conventional to a PEA is utilized, this system is superior to the original PEA, as it determines both the eigenstate and the eigenphase, given no prior knowledge.
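To make the control flow concrete, the following simplified sketch simulates the loop classically (our illustration, not the authors' controller): the helper metric_c stands in for the quantum circuit's estimate $\tilde{\mathcal{C}}$, the θR sweep plays the role of step (c), and the function names, grid sizes and cut-offs are our own choices.

```python
# Simplified classical simulation of the optimization loop of section 3.
# metric_c replaces the quantum circuit's estimate of C; names and constants are illustrative.
import numpy as np

def p0(dtheta, d_c):
    q = np.arange(d_c)
    return np.abs(np.sum(np.exp(2j * np.pi * q * dtheta)) / d_c) ** 2

def metric_c(U, phi, theta_r, d_c):
    vals, vecs = np.linalg.eig(U)
    thetas = np.angle(vals) / (2 * np.pi) % 1
    w = np.abs(vecs.conj().T @ phi) ** 2
    return float(sum(wk * p0(t - theta_r, d_c) for wk, t in zip(w, thetas)))

def best_c(U, phi, d_c, n_grid=32):
    """Step (c): coarse sweep of theta_R, then a finer sweep around the best value."""
    grid = np.arange(n_grid) / n_grid
    t0 = grid[int(np.argmax([metric_c(U, phi, t, d_c) for t in grid]))]
    fine = (t0 + np.linspace(-1.0, 1.0, n_grid) / n_grid) % 1
    cs = [metric_c(U, phi, t, d_c) for t in fine]
    k = int(np.argmax(cs))
    return cs[k], fine[k]

def spea(U, d_c=2, c_stop=1 - 1e-4, max_iter=300, seed=1):
    rng = np.random.default_rng(seed)
    d_t = U.shape[0]
    phi = rng.normal(size=d_t) + 1j * rng.normal(size=d_t)      # step (a): random start
    phi /= np.linalg.norm(phi)
    c_star, theta_star = best_c(U, phi, d_c)
    for _ in range(max_iter):
        # step (b): random orthonormal basis whose first vector is |Phi>
        M = np.column_stack([phi, rng.normal(size=(d_t, d_t - 1))
                                  + 1j * rng.normal(size=(d_t, d_t - 1))])
        B, _ = np.linalg.qr(M)
        a, updated = 1.0, False
        while not updated and a > 1e-3:                          # steps (d) and (e)
            for m in range(2 * d_t):
                z = 1j if m >= d_t else 1.0
                trial = phi + z * a * (1 - c_star) * B[:, m % d_t]
                trial /= np.linalg.norm(trial)                   # equation (5)
                c_new, theta_new = best_c(U, trial, d_c)
                if c_new > c_star:
                    phi, c_star, theta_star, updated = trial, c_new, theta_new, True
            if not updated:
                a /= 2
        if c_star >= c_stop:                                     # step (f)
            break
    return phi, theta_star, c_star

# illustrative run on a random 4-dimensional unitary
rng = np.random.default_rng(0)
U, _ = np.linalg.qr(rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4)))
phi, theta, c = spea(U)
print(theta, c)   # theta approximates one eigenphase of U; c should approach 1
```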

4. SPEA simulation

We test the proposed algorithm on the IBM Q platform and on a local computer. In both cases, a classical computer is used to simulate the $\mathcal{C}$ parameter (of equation (4)) delivered by a quantum circuit. These simulations of a quantum system are ideal: neither the IBM Q nor the local computer simulations include any noise terms i.e. all quantum gates are assumed to operate with perfect fidelity. The IBM Q trials study the convergence of the optimization algorithm to any single eigenstate on two- and four-dimensional systems. The local computer simulations run a full spectral decomposition on a 16-dimensional system with various control levels dc . The local computer simulations also consider the convergence of the optimization algorithm to any single eigenstate for various control levels dc on systems ranging from 8 to 512-dimensional, to provide some indication of the method's scalability.

Both simulations apply the variational algorithm as defined in section 3, with one primary difference: the local computer simulations follow the primary method of step (c) whereas the IBM Q simulations follow the alternative method. The IBM Q simulation runs one measurement with Rz (θR = 0) and applies the eigenphase estimation methodology introduced in the Discussion of [20]—under the (initially inaccurate, but increasingly accurate) assumption that the input state is an eigenstate—to obtain a phase estimate θ*. Then, the measurement is run with Rz (θR = −θ*) to obtain the metric $\mathcal{C}$ used for the optimization. By contrast, the local computer's simulations follow the primary method, picking a representative sample of input phases to apply to the Rz gate and selecting the largest $\mathcal{C}$ that arises. The local computer's simulations therefore require more runs of the quantum circuit per trial, but only require two control-qudit detectors: one for the |0⟩ state and one for the not(|0⟩), whereas the IBM Q methodology needs one detector for each control level (|0⟩, |1⟩, ..., |dc − 1⟩). The alternative approach (or some hybrid approach) is generally preferable if the hardware is available for dc detectors.

4.1. Qiskit simulation

On the IBM Q experience platform, we developed our quantum algorithms with Qiskit, the python-based programming package provided by IBM Q which offers all the facilities to design, simulate and execute quantum algorithms on IBM's quantum computers [16]. In this section we present the simulation results of the SPEA on the Qiskit quantum simulator.

Three sets of simulations are run on the IBM Q, one with the two-dimensional operator U1 and the other two with the four-dimensional operators U2 and U3, the matrix forms of which are shown in appendix B. U1 and U2 are operators directly built with the default gates offered by the IBM Q and U3 is a unitary exponentiation based on the Hamiltonian of the hydrogen molecule generated with the Bravyi–Kitaev transformation [21]. The second quantization Hamiltonian of a hydrogen molecule with a bond length of 0.74 Å is calculated in the STO-3G minimal basis using PySCF [22] and the transformation is done by OpenFermion [23]. We encode the matrix into the 'operator' class provided by Qiskit [16]. In the simulations of each unitary operation Ui where i = 1, 2, 3, we start with input states that are good approximations of one of the operator's eigenstates and then move to input states which are nearly equidistant from every eigenstate. We quantify the distance of the input state |Φ⟩ to its nearest eigenstate |ν⟩ by taking the absolute inner product $\left\vert \langle {\Phi}\vert \nu \rangle \right\vert $ as reported in table 1. In each simulation we run the same input state 20 times and set the maximum iteration number to be 50 (to save resources) and the stopping condition, which is the difference between the $\mathcal{C}$ factor and 1, to be 10⁻⁴. The stopping condition is set so that when it is met we will have a reasonably good approximation of the eigenstate. We then calculate the average number of iterations and the standard deviation of the number of iterations required to meet the stopping condition. Most trials reach the stopping condition before exceeding the iteration limit and give a good approximation of one of the eigenstate–eigenphase pairs, as indicated by the low mean phase errors reported. The results are shown in table 1.

Table 1. IBM Q Qiskit QASM simulator results. Three unitary operators are simulated on the IBM Q platform. The SPEA is run 20 times starting from each input state. The distance from the input state to the nearest eigenstate is quantified by the absolute inner product of the two vectors (an inner product of 1 being identical and smaller values indicating greater difference). The table records the mean and standard deviation (S.D.) of the number of iterations needed to reach the stopping condition ($1-\tilde{\mathcal{C}}=10^{-4}$). The average absolute phase error is reported in radians. For each Ui the input states range from good approximations of one of the operator's eigenstates to input states which are nearly equidistant from every eigenstate. See that the iteration mean tends to increase with decreasing inner product but the ultimate phase error is generally agnostic to the input state difference.

Unitary | Input state | Abs. inner product | Iterations (mean) | Iterations (S.D.) | Mean phase error
U1 | (0.1951, 0.9808) | 0.98 | 6.20 | 2.82 | 1.099 × 10⁻²
U1 | (0.3827, 0.9239) | 0.92 | 8.15 | 3.41 | 1.005 × 10⁻²
U1 | (0.7071, 0.7071) | 0.71 | 8.90 | 3.34 | 1.005 × 10⁻²
U2 | (0, 0, 0.7432, 0.6690) | 0.99 | 5.85 | 8.14 | 2.083 × 10⁻²
U2 | (0, 0, 0.6690, 0.7432) | 0.99 | 6.7 | 10.42 | 2.168 × 10⁻²
U2 | (0, 0, 1, 0) | 0.71 | 17.7 | 6.06 | 1.663 × 10⁻²
U2 | (1, 0, 0, 0) | 0.71 | 23.05 | 11.22 | 2.167 × 10⁻²
U2 | (0.7071, 0, 0.7071, 0) | 0.50 | 21.3 | 10.71 | 2.262 × 10⁻²
U3 | (−0.1379, 0, 0, 0.9904) | 0.99 | 1.15 | 0.36 | 1.885 × 10⁻²
U3 | (0, 0.7807, 0.6247, 0) | 0.99 | 1.1 | 0.3 | 1.508 × 10⁻²
U3 | (0, 1, 0, 0) | 0.71 | 4.35 | 4.17 | 1.414 × 10⁻²
U3 | (0.7071, 0, 0, 0.7071) | 0.62 | 4.15 | 1.01 | 1.570 × 10⁻²
U3 | (0.5774, 0.5774, 0, 0.5774) | 0.51 | 21.5 | 11.06 | 2.199 × 10⁻²

For each operator U1, U2, U3, input states which are initially close to an eigenstate (input states with a large absolute inner product) have a lower required iteration number than those which are initially far from all eigenstates (low absolute inner product). Appreciate that the eigenstate converged to is non-deterministic, as the optimizer itself is non-deterministic due to randomness added by the random orthogonal basis in step (b). In other words, the added randomness may cause the input state to converge to an eigenstate other than the closest one. The mean phase error recorded in table 1 is calculated by taking the absolute value of the difference between the eigenphase of the converged input state and the true eigenphase of the eigenstate that the input state converged to. As the input state can converge to different eigenstates in the simulation, we report the absolute phase error rather than the error percentage. No correlation between the phase error and the absolute inner product is apparent, indicating that the quality of the final eigenphase–eigenstate pair is agnostic to the proximity of the initial input state to any eigenstate. Variations in mean phase error are likely a function of which particular eigenphase–eigenstate pair was converged to. During the simulation of U3 with an input state that is an equal-weight combination of all the eigenstates—i.e. the input state hardest to converge to an eigenstate—there are a few cases in which the iteration limit is reached before the stopping condition is met. This can usually be fixed by increasing the iteration limit.

In summary, these results indicate that the SPEA method is capable of delivering high-quality estimates of eigenphase–eigenstate pairs with no prior knowledge of the operator's eigenstates, in the case of both arbitrary (U1, U2) and physically relevant (U3) operators. The quality of the estimates is not influenced by prior system knowledge; however, the resources required to deliver an eigenstate–eigenphase pair may be reduced with prior knowledge.

4.2. Full spectral decomposition

The statistical approach differs from some other variational approaches [17] in that it does not diagonalize the input state matrix, but solves for only one eigenphase–eigenstate pair. This allows for significant reduction in the number of parameters (and iterations) needed to perform the optimization. However, as shown below, a complete spectral decomposition is realizable. As a representative case, we consider the 16-by-16 Hamiltonian ${\mathcal{H}}_{{\mathrm{H}}_{2}\mathrm{O}}$ for the water molecule H2O with the H–O–H angle at 104.5° and the bond length at 1.0 a.u. given in appendix C [24]. The Hamiltonian is converted to a unitary exponentiation, ${U}_{{\mathrm{H}}_{2}\mathrm{O}}={\text{e}}^{\text{i}{\mathcal{H}}_{{\mathrm{H}}_{2}\mathrm{O}}}$, and the matrix's spectral decomposition is simulated with the SPEA on a local computer.
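As an illustration of this preprocessing step (a sketch only: the actual Hamiltonian is listed in appendix C and is not reproduced here, so a random Hermitian matrix stands in for it), the exponentiation and the exact reference eigenphases can be computed as follows.

```python
# Exponentiate a Hamiltonian and list the exact eigenphases (in [0, 1)) used as the
# reference for the SPEA's retrieved pairs. A random 16x16 Hermitian matrix stands in
# for the H2O Hamiltonian of appendix C.
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(0)
A = rng.normal(size=(16, 16)) + 1j * rng.normal(size=(16, 16))
H = (A + A.conj().T) / 2                 # Hermitian stand-in for H_H2O
U = expm(1j * H)                         # U_H2O = exp(i H), as in the text

vals, vecs = np.linalg.eig(U)
ref_phases = np.sort(np.angle(vals) / (2 * np.pi) % 1)
print(ref_phases)                        # the 16 eigenphases a full decomposition should recover
```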

The simulation is run for various control levels dc until 120 successful spectral decompositions are achieved. Each of the 16 eigenphases is retrieved in a random order. The optimization runs until ${\mathcal{C}}^{\ast }\geqslant {C}_{\text{goal}}$. If the optimizer is unable to reach Cgoal, the process for that eigenphase will conclude so long as ${\mathcal{C}}^{\ast }\geqslant {C}_{\text{req}}$. If Creq is not met, the entire spectral decomposition is abandoned and the trial is classified as failed. Generally, Cgoal is achieved for the first 10 eigenvalues and the last 2 to 6 eigenvalues must settle at a lower value (due to small cumulative errors). Results are recorded in table 2 and plotted in figure 5.

Table 2. Statistics for 120 successful trials of complete spectral decomposition of ${U}_{{\mathrm{H}}_{2}\mathrm{O}}$ matrix. Cgoal is the $\tilde{C}$ value the optimizer attempts to reach, and generally does reach for at least the first 10 (of 16) eigen-estimates. Creq is the $\tilde{C}$ value the optimizer is required to reach for all eigen-estimates, else the trial is abandoned. For successful trials, the mean Fidelity and standard deviation are reported, as well as the mean eigenphase error (in radians). Note that dc = 2 was run on two different Cgoal, Creq levels for comparison.

dc | Cgoal | Creq | Trials | Fails | Fidelity (mean) | Fidelity (S.D.) | Phase error (mean) | Phase error (S.D.)
2 | 0.999 | 0.95 | 120 | 77 | 0.984 | 7.29 × 10⁻³ | 2.84 × 10⁻² | 10.7 × 10⁻²
2 | 0.995 | 0.9 | 120 | 13 | 0.966 | 10.68 × 10⁻³ | 4.34 × 10⁻² | 17.3 × 10⁻²
3 | 0.995 | 0.9 | 120 | 17 | 0.981 | 6.62 × 10⁻³ | 3.12 × 10⁻² | 9.14 × 10⁻²
4 | 0.995 | 0.9 | 120 | 32 | 0.986 | 5.32 × 10⁻³ | 2.40 × 10⁻² | 7.36 × 10⁻²
5 | 0.995 | 0.9 | 120 | 30 | 0.986 | 7.10 × 10⁻³ | 1.86 × 10⁻² | 4.98 × 10⁻²
6 | 0.995 | 0.9 | 120 | 26 | 0.989 | 4.80 × 10⁻³ | 1.53 × 10⁻² | 3.80 × 10⁻²
7 | 0.995 | 0.9 | 120 | 60 | 0.991 | 7.05 × 10⁻³ | 1.37 × 10⁻² | 3.65 × 10⁻²
8 | 0.995 | 0.9 | 120 | 132 | 0.992 | 6.13 × 10⁻³ | 1.20 × 10⁻² | 3.54 × 10⁻²

Figure 5. (Left axis; black; circles) mean fidelity achieved for optimization simulation at control level dc , with error bars from the 25th to 75th percentile. (Right axis; blue; squares) mean phase error (per phase) for optimization simulation at control level dc , with error bars from the 25th to 75th percentile.


To determine the fidelity of the spectral decomposition, the retrieved eigenphase–eigenstate pairs, (θk , |νk ⟩) were used to create the matrix,

Equation (6): ${U}_{\text{retrieved}}=\sum _{k=0}^{15}{\mathrm{e}}^{\mathrm{i}2\pi {\theta }_{k}}\vert {\nu }_{k}\rangle \langle {\nu }_{k}\vert $

Letting $M={U}_{{\mathrm{H}}_{2}\mathrm{O}}^{{\dagger}}{U}_{\text{retrieved}}$, the fidelity is defined as

Equation (7)

following the average fidelity definition of [25] where n is matrix dimension (i.e. n = 16). The reported phase error is the average absolute phase error over all 16 phases,

Equation (8)

Note that dc = 2 was run for two different sets of Cgoal and Creq. Increasing these values increased the optimization failure rate, but also improved decomposition fidelity and reduced the average eigenphase error. The high failure rate suggests superior results will be achieved by increasing dc , the number of control levels, over increasing ${\mathcal{C}}^{\ast }$ (analogous to Cgoal), when possible. This is expected, as increasing dc leads to a narrower cost function. Overall, these results indicate the viability of the SPEA for full and partial eigenphase recovery and provide an example of a quantum algorithm which benefits from working with high-dimensional quantum states, i.e. qudits.
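The scoring of a decomposition can be sketched as follows (our illustration: the 'retrieved' pairs are exact pairs with artificial noise, and the fidelity line assumes the average-fidelity formula of Pedersen, Møller and Mølmer as a stand-in for the definition of [25]).

```python
# Rebuild U_retrieved from eigenphase-eigenstate pairs (equation (6)) and score it.
# The retrieved pairs here are exact pairs plus artificial noise, purely for illustration;
# the fidelity formula is an assumed stand-in for the definition of reference [25].
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(1)
A = rng.normal(size=(16, 16)) + 1j * rng.normal(size=(16, 16))
H = (A + A.conj().T) / 2
U_target = expm(1j * H)

vals, vecs = np.linalg.eig(U_target)
true_phases = np.angle(vals) / (2 * np.pi) % 1
ret_phases = (true_phases + 1e-3 * rng.normal(size=16)) % 1          # noisy eigenphases
ret_states = vecs + 1e-3 * (rng.normal(size=(16, 16)) + 1j * rng.normal(size=(16, 16)))
ret_states /= np.linalg.norm(ret_states, axis=0)                      # noisy eigenstates

# equation (6): U_retrieved = sum_k exp(i 2 pi theta_k) |nu_k><nu_k|
U_ret = sum(np.exp(2j * np.pi * t) * np.outer(v, v.conj())
            for t, v in zip(ret_phases, ret_states.T))

n = 16
M = U_target.conj().T @ U_ret
fidelity = (np.abs(np.trace(M)) ** 2 + np.trace(M @ M.conj().T).real) / (n * (n + 1))

wrap = lambda x: (x + 0.5) % 1 - 0.5                                       # fold into [-0.5, 0.5)
phase_error = 2 * np.pi * np.mean(np.abs(wrap(ret_phases - true_phases)))  # radians
print(fidelity, phase_error)
```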

4.3. Scaling to higher dimensions

The classical controller is a modified gradient descent algorithm and in this work has not been optimized for scaling to very high-dimensional systems. Here the controller serves to demonstrate that the function $\mathcal{C}$—the output of the quantum circuit—is usable as a (negative) cost function for any classical optimizer. Nevertheless, we consider randomly generated systems from 8 to 512 dimensions under control levels dc = 2, 3, 4, 8 to briefly observe how the SPEA may scale.

The classical controller was set to run with no stopping condition. Instead, the value of ${\mathcal{C}}^{\ast }$ over the course of a single optimization was recorded at iterations 5, 20, 100, and 500 for a randomly generated system U and a randomly selected starting state. One run of this process is considered one trial. Over all trials, the average values of $1-{\mathcal{C}}^{\ast }$ are reported in table 3. Each control level dc = 2, 3, 4, and 8 uses the same set of random systems and starting points. Different trials may drive the optimizer to different eigenstates, due to inherent randomness in the controller. After iteration 500, the absolute difference between the final estimated eigenphase and the true eigenphase is recorded. This average final absolute phase error is reported.

Table 3. For du -by-du dimensional randomly generated unitary systems U, the quantity $1-{\mathcal{C}}^{\ast }$ is averaged over multiple trials for SPEA systems with varying control levels dc . The quantity $1-{\mathcal{C}}^{\ast }$ is reported at iterations 5, 20, 100, 500 over the course of an optimization run with no stopping conditions. After iteration 500, the mean absolute phase error is reported in radians.

du | dc | Trials | 1 − C* (iter. 5) | 1 − C* (iter. 20) | 1 − C* (iter. 100) | 1 − C* (iter. 500) | Mean final phase error
8 | 2 | 100 | 3.84 × 10⁻² | 8.39 × 10⁻³ | 0.69 × 10⁻⁴ | 3.43 × 10⁻⁵ | 0.04 × 10⁻²
8 | 3 | 100 | 4.36 × 10⁻² | 6.63 × 10⁻³ | 0.38 × 10⁻⁴ | 1.63 × 10⁻⁵ | 0.02 × 10⁻²
8 | 4 | 100 | 4.71 × 10⁻² | 3.97 × 10⁻³ | 0.23 × 10⁻⁴ | 1.01 × 10⁻⁵ | 0.01 × 10⁻²
8 | 8 | 100 | 6.37 × 10⁻² | 2.84 × 10⁻³ | 0.17 × 10⁻⁴ | 0.48 × 10⁻⁵ | 0.005 × 10⁻²
16 | 2 | 100 | 4.16 × 10⁻² | 8.04 × 10⁻³ | 1.29 × 10⁻⁴ | 11.76 × 10⁻⁵ | 0.33 × 10⁻²
16 | 3 | 100 | 6.38 × 10⁻² | 7.45 × 10⁻³ | 0.67 × 10⁻⁴ | 3.24 × 10⁻⁵ | 0.04 × 10⁻²
16 | 4 | 100 | 8.93 × 10⁻² | 6.51 × 10⁻³ | 0.48 × 10⁻⁴ | 1.73 × 10⁻⁵ | 0.02 × 10⁻²
16 | 8 | 100 | 14.95 × 10⁻² | 6.93 × 10⁻³ | 0.17 × 10⁻⁴ | 0.82 × 10⁻⁵ | 0.01 × 10⁻²
32 | 2 | 100 | 5.03 × 10⁻² | 9.11 × 10⁻³ | 1.48 × 10⁻⁴ | 12.79 × 10⁻⁵ | 0.54 × 10⁻²
32 | 3 | 100 | 11.12 × 10⁻² | 10.46 × 10⁻³ | 1.4 × 10⁻⁴ | 18.31 × 10⁻⁵ | 0.45 × 10⁻²
32 | 4 | 100 | 18.44 × 10⁻² | 12.17 × 10⁻³ | 1.16 × 10⁻⁴ | 10.86 × 10⁻⁵ | 0.17 × 10⁻²
32 | 8 | 100 | 31.71 × 10⁻² | 22.15 × 10⁻³ | 0.95 × 10⁻⁴ | 2.34 × 10⁻⁵ | 0.01 × 10⁻²
64 | 2 | 100 | 8.74 × 10⁻² | 10.48 × 10⁻³ | 1.79 × 10⁻⁴ | 27.74 × 10⁻⁵ | 1.12 × 10⁻²
64 | 3 | 100 | 23.27 × 10⁻² | 25.76 × 10⁻³ | 2.27 × 10⁻⁴ | 20.49 × 10⁻⁵ | 0.50 × 10⁻²
64 | 4 | 100 | 33.65 × 10⁻² | 96.27 × 10⁻³ | 2.32 × 10⁻⁴ | 38.2 × 10⁻⁵ | 0.63 × 10⁻²
64 | 8 | 100 | 49.44 × 10⁻² | 154.09 × 10⁻³ | 1.54 × 10⁻⁴ | 5.97 × 10⁻⁵ | 0.04 × 10⁻²
128 | 2 | 100 | 16.45 × 10⁻² | 24.17 × 10⁻³ | 2.26 × 10⁻⁴ | 34.45 × 10⁻⁵ | 1.12 × 10⁻²
128 | 3 | 100 | 35.69 × 10⁻² | 175.24 × 10⁻³ | 4.59 × 10⁻⁴ | 65.51 × 10⁻⁵ | 0.98 × 10⁻²
128 | 4 | 100 | 44.98 × 10⁻² | 135.54 × 10⁻³ | 4.68 × 10⁻⁴ | 54.59 × 10⁻⁵ | 0.78 × 10⁻²
128 | 8 | 100 | 58.69 × 10⁻² | 137.72 × 10⁻³ | 4.78 × 10⁻⁴ | 59.57 × 10⁻⁵ | 0.41 × 10⁻²
256 | 2 | 100 | 25.29 × 10⁻² | 156.88 × 10⁻³ | 5.37 × 10⁻⁴ | 60.39 × 10⁻⁵ | 0.68 × 10⁻²
256 | 3 | 100 | 43.44 × 10⁻² | 124.12 × 10⁻³ | 9.71 × 10⁻⁴ | 110.67 × 10⁻⁵ | 0.64 × 10⁻²
256 | 4 | 100 | 51.71 × 10⁻² | 193.64 × 10⁻³ | 11.82 × 10⁻⁴ | 136.93 × 10⁻⁵ | 0.59 × 10⁻²
256 | 8 | 100 | 65.86 × 10⁻² | 400.17 × 10⁻³ | 20.37 × 10⁻⁴ | 119.06 × 10⁻⁵ | 0.48 × 10⁻²
512 | 2 | 26 | 32.12 × 10⁻² | 146.61 × 10⁻³ | 19.61 × 10⁻⁴ | 117.08 × 10⁻⁵ | 0.44 × 10⁻²
512 | 3 | 26 | 49.53 × 10⁻² | 288.44 × 10⁻³ | 22.36 × 10⁻⁴ | 227.46 × 10⁻⁵ | 0.44 × 10⁻²
512 | 4 | 26 | 58.17 × 10⁻² | 398.08 × 10⁻³ | 51.21 × 10⁻⁴ | 239.06 × 10⁻⁵ | 0.37 × 10⁻²
512 | 8 | 26 | 73.14 × 10⁻² | 411.68 × 10⁻³ | 26.01 × 10⁻⁴ | 173.41 × 10⁻⁵ | 0.28 × 10⁻²

Under an ideal optimization, ${\mathcal{C}}^{\ast }$ will approach 1 as rapidly as possible. It should be noted (see appendix A) that ${\mathcal{C}}^{\ast }$ should not be directly compared across different values of dc : the higher the value of dc , the lower ${\mathcal{C}}^{\ast }$ will be for a given estimated eigenstate–eigenphase pair. See that, for a given du , the final phase error decreases as dc increases, whether or not ${\mathcal{C}}^{\ast }$ increases. Regardless, see that for du ⩽ 64, higher values of dc may cause a lower initial value of ${\mathcal{C}}^{\ast }$, but generally achieve a lower $1-{\mathcal{C}}^{\ast }$ (i.e. a higher ${\mathcal{C}}^{\ast }$) near the end of the optimization process. The final $1-{\mathcal{C}}^{\ast }$ values for all dc at a given du are within an order of magnitude of one another in all cases. All tested dimensions are able to achieve a final ${\mathcal{C}}^{\ast }$ better than 0.997. These results suggest the system is capable of scaling favorably to high-dimensional systems. The discussion section addresses improvements to the classical controller for realizing this scalability.

5. Discussion

Section 4 demonstrates that $\mathcal{C}$ of equation (4) can be used effectively as a cost function to determine an eigenphase–eigenstate pair of an arbitrary unitary system despite the simplistic nature of the control algorithm (outlined in section 3). No prior system knowledge is required for successful convergence. The process was reported here for fundamental unitary gates, unitary systems relevant to computational chemistry, and entirely arbitrary choices of U. For large U, concern over the coherence time of the operations which need to take place can be reduced by systems which decompose U into a set of smaller gates whose operations may take place concurrently [15, 26, 27]. Given the controlled-nature of the PEA, a similar advantage may not be available to the SPEA, or its implementation may be more complex. However, the SPEA only requires a single control gate—differentiating it from the traditional PEA which has a number of operations scaling with 1/p where p is the desired eigenphase precision—allowing the SPEA to be more competitive with other circuit-depth sensitive variational approaches.

Other advantages common to variational algorithms can be applied to the SPEA—notably the SPEA classical controller need not know the complete $\mathcal{O}(2d)$ parameters defining the input quantum state, only a (potentially) smaller number of parameters necessary to prepare the state, avoiding the N-representability problem. Like a more select number of variational solvers, the SPEA offers self-verification of its results (the proximity of $\mathcal{C}$ to one) [27]. The SPEA is not designed around systems which are representable with a polynomial number of terms [15, 27]; the SPEA may be more suitable for non-representable systems than other variational approaches.

As $\mathcal{C}$ is the expectation value of a quantum circuit's output, the algorithm as outlined in this work asks for an increasingly large number of runs of the quantum circuit in order to estimate $\mathcal{C}$ with sufficient accuracy as $\mathcal{C}$ approaches one. Such requirements can be mitigated with a more sophisticated classical controller, which may employ more efficient sampling techniques (such as a Metropolis–Hastings algorithm) or make better use of the information available from $\mathcal{C}$ (e.g. driving $\mathcal{C}(\vert \psi \rangle ,\theta )\to 1$ is closely related to forcing $\mathcal{C}(\vert \psi \rangle ,\theta +\frac{1}{{d}_{c}})\to 0$). The issue can also be ameliorated by substituting the control-U for control-Uk (for k > 1) as $\mathcal{C}$ approaches one, decreasing $\mathcal{C}$ for a given estimated eigenphase–eigenstate pair and thus allowing it to be estimated with fewer calls to the quantum circuit. The quality of the eigenstate–eigenphase retrieval may be more accurately defined by using Uk or (rather than following appendix A) by fitting the distribution of $\mathcal{C}(\vert \psi \rangle ,\theta +x)$ over x to the ideal distribution of ${\mathcal{P}}_{\theta }(x)$.
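A small numerical illustration of the control-Uk point (same idealized setting as equation (3); the numbers are arbitrary): for an eigenstate input with a small residual phase error, running with Uk scales the mismatch inside ${\mathcal{P}}_{0}$ by k, pushing $\mathcal{C}$ further from one and making the residual resolvable with fewer shots.

```python
# With control-U^k the residual phase mismatch delta is effectively scaled by k,
# so 1 - C grows (roughly as k^2 for small delta) and needs fewer shots to resolve.
import numpy as np

def p0(dtheta, d_c=2):
    q = np.arange(d_c)
    return np.abs(np.sum(np.exp(2j * np.pi * q * dtheta)) / d_c) ** 2

delta = 1e-3                      # residual eigenphase error of a near-converged pair
for k in (1, 4, 16):
    print(k, 1 - p0(k * delta))   # ~9.9e-6, ~1.6e-4, ~2.5e-3
```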

6. Conclusion

In this work, we have proposed a novel statistical variational approach (SPEA) to the quantum PEA. From the probabilistic output of a PEA circuit using non-eigen input states, we have defined a statistical metric $\mathcal{C}$ indicating the proximity of any given input state to the nearest eigenstate and developed an optimization process that can variationally retrieve all the eigenstate–eigenphase pairs of a given unitary operator. The SPEA takes advantage of the hardware intended for the IPEA and therefore requires no novel quantum hardware development. The main disadvantage of the SPEA is that the non-deterministic nature of the measurements requires running the quantum circuit repeatedly for each measurement setting. However, in the near-term era, repeated runs of a quantum circuit per measurement are already the norm, due to noise and imperfect gate fidelity. One of the main advantages of the SPEA compared to the PEA and IPEA is the ability to systematically find both the eigenstates and associated eigenphases, rather than just the eigenphases.

The simulations on the IBM Q platform with Qiskit prove the feasibility of applying the SPEA on standard quantum computation platforms. On the local computer, the full spectral decomposition of the operator generated from the water molecule Hamiltonian demonstrates the viability of the SPEA for applications in quantum chemistry. The ability to retrieve eigenstates and the efficiency (in terms of a low iteration requirement) of this method show the improvement over the original PEA methods and offer the clear potential to work with larger physical and chemical systems. Further, the SPEA is demonstrated to work with arbitrary U and does not require that U be expanded into a sum of simpler operations. As compared to variational approaches that search only for the ground state, the SPEA is able to search for any eigenstate with no modifications to the quantum circuit.

Future work includes improving the optimization process with a more sophisticated algorithm for the classical controller. In addition to improving efficiency and reducing the failure rate, this may also improve the accuracy of the eigenphase–eigenstate retrieval as well as the fidelity of the full spectral decomposition. The efficiency and viability of our methods enable us to simulate more complex systems in quantum chemistry. Future work also includes implementing this method on the real quantum hardware provided by IBM Q and on a photonic platform with high-dimensional control-qudit capabilities.

Acknowledgments

We would like to acknowledge the financial support by the National Science Foundation (NSF) under Award No. 1839191-ECCS and under Award No. 2124511-CHE.

Data availability statement

All data that support the findings of this study are included within the article (and any supplementary files).

Appendix A.: Quality of eigenphase–eigenstate retrieval

Preliminary note: all phases $\theta \in \left[0,1\right)$. The difference between two phases can be found via the function

Equation (A.1): $d({\theta }_{a},{\theta }_{b})=\left[\left({\theta }_{a}-{\theta }_{b}+\tfrac{1}{2}\right)\,\mathrm{mod}\,1\right]-\tfrac{1}{2}$

where $d({\theta }_{a},{\theta }_{b})\in \left[-.5,.5\right)$. In short, as the phase wraps around the phase difference also wraps around e.g. $d(\frac{1}{8},\frac{3}{4})=\frac{3}{8}$, not $\frac{-5}{8}$. And $d(\frac{3}{4},\frac{1}{8})=\frac{-3}{8}$, not $\frac{5}{8}$. Like usual differences, d(θa , θb ) = −d(θb , θa ). Appreciate that—for ${\mathcal{P}}_{0}$ of equation (3)—${\mathcal{P}}_{0}(d({\theta }_{a},{\theta }_{b}))={\mathcal{P}}_{0}({\theta }_{a}-{\theta }_{b})\forall {\theta }_{a},{\theta }_{b}$.
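A one-line implementation of this wrapped difference (our sketch, consistent with the two examples above):

```python
# Wrapped phase difference d(theta_a, theta_b) in [-0.5, 0.5).
def d(theta_a, theta_b):
    return (theta_a - theta_b + 0.5) % 1 - 0.5

assert d(1/8, 3/4) == 3/8 and d(3/4, 1/8) == -3/8
```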

Here, we quantify the quality of the eigenstate |Φ⟩* and eigenphase ${\theta }_{R}^{\ast }$ using ${\mathcal{C}}^{\ast }=\mathcal{C}(\vert {{\Phi}\rangle }^{\ast },{\theta }_{R}^{\ast })$. When ${\mathcal{C}}^{\ast }$ is greater than the largest (non-global) local maximum of ${\mathcal{P}}_{\theta }$ (of equation (3)), then ${\theta }_{R}^{\ast }$ must be within the primary lobe of ${\mathcal{P}}_{\theta }$ (examples shown in figure 3). That is, when

Equation (A.2): ${\mathcal{C}}^{\ast } > \underset{\vert x\vert \geqslant {d}_{c}^{-1}}{\mathrm{max}}\,{\mathcal{P}}_{0}(x)$

we are within the primary lobe of ${\mathcal{P}}_{\theta }$. Let ${\theta }_{{k}^{\ast }}$ be the eigenphase closest to θR and define ${\delta }^{\ast }\equiv d({\theta }_{{k}^{\ast }},{\theta }_{R})$. When equation (A.2) is true, then ${\mathcal{P}}_{0}({\delta }^{\ast })\geqslant {\mathcal{P}}_{0}({\theta }_{k}-{\theta }_{R})\forall k$ and

Equation (A.3): ${\mathcal{P}}_{0}({\delta }^{\ast })\geqslant \sum _{k}{\left\vert \langle {\nu }_{k}\vert {\Phi}\rangle \right\vert }^{2}\,{\mathcal{P}}_{0}({\theta }_{k}-{\theta }_{R})={\mathcal{C}}^{\ast }$

As ${\mathcal{P}}_{0}$ is symmetric and monotonic within the primary lobe,

Equation (A.4): $\vert {\delta }^{\ast }\vert \leqslant {\mathcal{P}}_{0}^{-1}({\mathcal{C}}^{\ast })$

Therefore the estimated eigenphase ${\theta }_{R}^{\ast }$ is within $\pm {\mathcal{P}}_{0}^{-1}({\mathcal{C}}^{\ast })$ of the nearest eigenphase (whenever equation (A.2) is met).

Now to quantify the eigenstate estimate. Define a Δ-eigenstate |νΔ⟩ as any superposition of eigenstates where the corresponding eigenphases are within ±Δ of ${\theta }_{R}^{\ast }$

Equation (A.5): $\vert {\nu }_{{\Delta}}\rangle =\sum _{m\in {\left\{m\right\}}_{{\Delta}}}{\alpha }_{m}\vert {\nu }_{m}\rangle ,\quad {\left\{m\right\}}_{{\Delta}}=\left\{m:\vert d({\theta }_{m},{\theta }_{R}^{\ast })\vert \leqslant {\Delta}\right\}$

(and where ${\sum }_{m}{\left\vert {\alpha }_{m}\right\vert }^{2}=1$). That is, |νΔ⟩ is a superposition of eigenstates (indexed {m}Δ) that are nearly degenerate: the corresponding eigenphases are all within 2Δ of one another. Proceeding from equation (4) (whenever equation (A.2) holds),

Equation (A.6)

Letting ${\sum }_{k\in {\left\{m\right\}}_{{\Delta}}}{\left\vert \langle {\nu }_{k}\vert {\Phi}\rangle \right\vert }^{2}=f$,

Equation (A.7)

The estimated eigenstate |Φ⟩* matches some Δ-eigenstate (as defined by equation (A.5)) with fidelity f given by equation (A.7) (whenever equation (A.2) holds and $\vert {\Delta}\vert \in [0,\frac{1}{{d}_{c}}]$).

Appendix B.: Details for the IBM Q SPEA calculations

On IBM Q we realize an SPEA with a four-dimensional control register by using two qubits (the top two rails) as controls. The target is either two- or four-dimensional, using the bottom one or two rails, respectively.

We list the three operators used in our simulations on the IBM Q in matrix form, accompanied by the eigenstates of each matrix, and show how we realized these matrices with Qiskit.

In the following we use the rotation-Z gate as defined by Qiskit [16]:

Equation (B.1): ${R}_{Z}(\theta )=\begin{pmatrix}{\mathrm{e}}^{-\mathrm{i}\theta /2} & 0\\ 0 & {\mathrm{e}}^{\mathrm{i}\theta /2}\end{pmatrix}$

as well as the phase gate:

Equation (B.2): $P(\theta )=\begin{pmatrix}1 & 0\\ 0 & {\mathrm{e}}^{\mathrm{i}\theta }\end{pmatrix}$

and the Hadamard gate:

Equation (B.3): $H=\frac{1}{\sqrt{2}}\begin{pmatrix}1 & 1\\ 1 & -1\end{pmatrix}$

The first operator U1 is a single qubit rotation-Z gate with θ = π/2,

Equation (B.4): ${U}_{1}={R}_{Z}(\pi /2)=\begin{pmatrix}{\mathrm{e}}^{-\mathrm{i}\pi /4} & 0\\ 0 & {\mathrm{e}}^{\mathrm{i}\pi /4}\end{pmatrix}$

The second operator is a two qubit operation achieved by a phase gate P(θ = π/4) acting on the first qubit and a rotation-Z gate RZ(θ = π/2) sandwiched between two Hadamard gates H, acting on the second qubit. The matrix form is

Equation (B.5)

with eigenstates

Equation (B.6)

The gate representations of the two operators can be found in figure B1.


Figure B1. Gate representation of the unitary operators applied in the simulation. (A) Represents operator U1, a two-dimensional operator applied to a single qubit. (B) Represents operator U2, a four-dimensional operator applied to two qubits. RZ is the rotation-Z gate, P is the phase gate and H is the Hadamard gate. When a control qubit is present the RZ gate and the P gate become controlled gates. The Hadamard gates will serve as their own inverse and therefore do not need to be implemented as controlled gates.


For the third operator we start with the lowest-energy Hamiltonian of the hydrogen molecule generated with the Bravyi–Kitaev transformation in the STO-3G basis,

Equation (B.7)

Then we take the exponential of the Hamiltonian to generate our unitary operator

Equation (B.8): ${U}_{3}={\mathrm{e}}^{\mathrm{i}{\mathcal{H}}_{{\mathrm{H}}_{2}}}$

The exponential of the Hamiltonian is computed with the scipy.linalg.expm function from the SciPy Python package [28]. The eigenstates are

Equation (B.9)

We design phase estimation circuits that work with up to two qubits in the control register and four qubits in total, as shown in figure B2. Due to the restrictions on qubit numbers, the simulations on the IBM Q are focused on lower-dimensional systems, i.e. one or two qubits in the target register. The rotation RZ gates are applied to each qubit in the control register after the controlled-Ui operations but before the inverse Fourier transform. Every time the quantum algorithm is called to generate a new $\mathcal{C}$ factor (following the 'alternative method' in step (c) of section 3), the algorithm runs twice: the first time the RZ gates are set to zero and the phase factor θR is estimated statistically; the second time the upper RZ gate applies phase −θR and the lower gate applies phase −2θR, together acting as a −θR rotation would on a dc = 4 qudit system. The classical optimization process described in section 3 is implemented using Python. The basis set {|Bm ⟩} is generated using the Gram–Schmidt method from the input vector plus a set of linearly independent vectors obtained from a randomly generated unitary matrix. As a deviation from how the algorithm is described in the main text, the search-step factor a is set to 1/2 in step (d) and is doubled (rather than halved) in step (e) if the $\mathcal{C}$ factor is not updated, up to seven times. The optimization concludes when the $\mathcal{C}$ factor meets the stopping condition, which means the input state has converged to an eigenstate, or when the maximum number of iterations is exceeded.
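A sketch of this circuit in Qiskit follows (our reconstruction from the description above and figure B2, not the authors' code; the qubit ordering, the realization of the weight-2 control by two applications of controlled-U, and the choice of U1 with θR set to its |0⟩-eigenphase are assumptions of the sketch).

```python
# Sketch of the d_c = 4 SPEA circuit of figure B2 for the two-dimensional operator
# U1 = RZ(pi/2). Qubits 0 and 1 form the four-level control register (qubit 0 carries
# weight 1, qubit 1 weight 2 -- an assumed ordering); the remaining qubit is the target.
import numpy as np
from qiskit import QuantumCircuit, transpile
from qiskit.circuit.library import QFT, UnitaryGate
from qiskit_aer import AerSimulator

def spea_counts(u, phi, theta_r, shots=2000):
    n_t = int(np.log2(u.shape[0]))                  # number of target qubits
    qc = QuantumCircuit(2 + n_t, 2)
    targets = list(range(2, 2 + n_t))
    qc.initialize(phi, targets)                     # prepare the target register |Phi>
    qc.h([0, 1])                                    # QFT on the d_c = 4 control register
    cu = UnitaryGate(u).control(1)
    qc.append(cu, [0] + targets)                    # one controlled-U from the weight-1 qubit
    qc.append(cu, [1] + targets)                    # two controlled-U from the weight-2 qubit,
    qc.append(cu, [1] + targets)                    # together realizing the multi-level control
    qc.rz(-2 * np.pi * theta_r, 0)                  # subtract theta_R (weight-1 control)
    qc.rz(-4 * np.pi * theta_r, 1)                  # subtract 2*theta_R (weight-2 control)
    qc.append(QFT(2, inverse=True).to_gate(), [0, 1])
    qc.measure([0, 1], [0, 1])
    backend = AerSimulator()
    job = backend.run(transpile(qc, backend), shots=shots)
    return job.result().get_counts()

u1 = np.diag([np.exp(-1j * np.pi / 4), np.exp(1j * np.pi / 4)])   # RZ(pi/2)
phi = np.array([1.0, 0.0])                        # eigenstate of U1, eigenphase 7/8
counts = spea_counts(u1, phi, theta_r=7 / 8)
c_tilde = counts.get("00", 0) / 2000              # estimate of C from the |0> frequency
print(counts, c_tilde)                            # c_tilde ~ 1 at the matching phase
```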


Figure B2. Illustration of the four-level control (dc = 4) SPEA circuit implemented using Qiskit. H is the Hadamard gate, RZ is the rotation-Z gate, P is the phase gate, and M represents a measurement. The top two rails are the control qubits and realize a four-dimensional control register. The bottom two rails represent the target register. The initialize block prepares the input state differently throughout the optimization process. Ui represents the unitary the SPEA is running. For i = 2, 3 we implement the circuit as diagrammed. For i = 1, the unitary is two-dimensional and we use only a single target rail. The three control gates together realize a multi(four)-level control gate as described in the main text. The two RZ gates together realize a four-level Rz (−θR ) as described in the main text. The double line at the bottom represents the classical information retrieved from the measurement gates (a total of 2 bits).


Appendix C.: Full H2O matrix

The Hamiltonian of the water molecule with the H–O–H angle at 104.5° and the bond length at 1.0 a.u. is calculated in the STO-3G minimal basis using PySCF [22] and the chemistry package provided by Qiskit [16]. The 16-by-16 Hamiltonian of the water molecule [24] used for the local computer's spectral decomposition simulations is sparse with nonzero entries,

Equation (C.1)

The exponential of the Hamiltonian is done with the MATLAB expm function.
