
A sample-based phase algorithm for quantum computers

Institute of Applied Physics, Friedrich Schiller University Jena, 07745 Jena, Germany
Max Planck School of Photonics, 07745 Jena, Germany

Abstract

In this work, we propose a quantum algorithm that applies arbitrary phase profiles $f(x)$ to a quantum register $\ket{\psi}=\sum_x \psi(x) \ket{x}$, meaning that it transforms the state of the register to $\ket{\psi'}=\sum_x \psi(x)\,e^{if(x)} \ket{x}$. The important property of the algorithm is that instead of using a unitary operator that applies the phase directly to the initial state, it uses a secondary register initialized as $\ket{\phi}=\sum_x \phi(x) \ket{x}$ to apply the phase. The algorithm uses copies of this secondary state to apply a phase profile $f(x)$ to $\ket{\psi}$ that is a function of the secondary state, $f(x)=\alpha\abs{\phi(x)}^2$, where $\alpha$ is an arbitrary coefficient. Therefore, this algorithm allows us to apply different phases to a quantum register using different input states as the secondary register. It is also important to note that the algorithm derived in this work is agnostic to the input states $\ket{\psi}$ and $\ket{\phi}$; therefore, the implementation of the algorithm itself does not change for different phase profiles $f(x)$, and we only need to change the input state $\ket{\phi}$ to apply the corresponding phases.

2 Introduction

Quantum computers can be used to store and process information. For particular problems, such as prime number factorization Shor (1997) and search and optimization Grover (1996), specific quantum algorithms have been discovered that can solve the problems with lower computational complexity than the best-known classical counterparts Montanaro (2016). Such quantum algorithms leverage the properties of quantum systems, such as superposition and entanglement, that enable them to achieve a performance boost and memory efficiency Nielsen & Chuang (2010).

In this work, our goal is to find an efficient quantum algorithm for applying arbitrary phase profiles to quantum registers. This problem translates to transforming the initial state of a quantum register as

$$\ket{\psi} = \sum_x \psi(x) \ket{x}$$

to

$$\ket{\psi'} = \sum_x \psi(x) \, e^{if(x)} \ket{x},$$

for any real-valued function $f(x)$.

Phase profiles are particularly important for simulating physical systems. The time evolution of many systems can be modeled using the change of phases of the initial state. In the following, we will discuss a quantum and a classical case where the time evolution is described via changes in the phases of the initial state.

We start by discussing the unitary evolution in quantum mechanics as an example. Let us assume we have a quantum mechanical system described using the state $\ket{\psi}$, and the Hamiltonian of the system is $\Hat{H}$, where the Hamiltonian is a function of the momentum operator as

$$\Hat{H} = H(\Hat{P}).$$

This assumption means that the Hamiltonian commutes with the momentum operator,

$$\left[ H(\Hat{P}), \Hat{P} \right] = 0,$$

thus we can find a set of common basis states $\ket{p}$ for them, and the Hamiltonian can be diagonalized as

$$\Hat{H} = \sum_p H(p) \ket{p}\bra{p},$$

where $\ket{p}$ are the momentum basis states.

The initial state $\ket{\psi}$ can also be described in the momentum basis as

$$\ket{\psi} = \sum_p \Psi(p) \ket{p},$$

where $\Psi(p)$ are the probability amplitudes of the state being in the momentum basis state $\ket{p}$.

The evolution of the system for time $t$ can be modeled using the unitary operator

$$\Hat{U} = e^{-i\frac{1}{\hbar}\Hat{H}t}.$$

This means we can simulate the time evolution of the initial system by applying a phase profile to the initial probability amplitudes as

$$\ket{\psi} = \sum_p \Psi(p) \ket{p} \,\,\longrightarrow\,\, \ket{\psi'} = \Hat{U}\ket{\psi} = \sum_p \Psi(p) \, e^{-i\frac{1}{\hbar}H(p)t} \ket{p},$$

where the phase profile in this case is

$$f(p) = -\frac{1}{\hbar}H(p)t.$$
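As a minimal classical sketch of this idea, the following numpy snippet applies the phase profile $f(p) = -H(p)t/\hbar$ to discretized momentum-space amplitudes for a free particle, $H(p) = p^2/2m$. All concrete values (`mass`, `hbar`, the Gaussian amplitudes) are illustrative assumptions, not taken from the text.

```python
import numpy as np

# Illustrative parameters (assumptions): natural units, free-particle Hamiltonian.
hbar = 1.0
mass = 1.0
t = 0.5

p = np.linspace(-4, 4, 64)              # discretized momentum values
psi_p = np.exp(-p**2)                   # some initial amplitudes Psi(p)
psi_p = psi_p / np.linalg.norm(psi_p)   # normalize the register state

f = -(p**2 / (2 * mass)) * t / hbar     # phase profile f(p) = -H(p) t / hbar
psi_p_evolved = psi_p * np.exp(1j * f)  # time evolution = phase multiplication

# The evolution is unitary: the norm of the state is preserved.
print(np.allclose(np.linalg.norm(psi_p_evolved), 1.0))
```

Since each factor $e^{if(p)}$ has unit modulus, only the phases of the amplitudes change; the probabilities $\abs{\Psi(p)}^2$ are untouched.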

Phases also play an important role in the evolution of classical systems. One example in the field of optics is the propagation of a light beam in the Fresnel approximation Saleh & Teich (1991).

Let us assume a field $u_0(x)$ to be the initial field distribution of an electromagnetic wave in the transverse plane, where the direction of the motion of the beam is the $z$ axis.

The propagation of the light beam after the distance $z$ in the Fresnel approximation can be simulated via the following steps:

  1. Applying a Fourier transformation to $u_0(x)$:

     $$U_0(k_x) = \mathcal{FT} \bigl\{ u_0(x) \bigr\}.$$

  2. Multiplying the calculated Fourier spectrum by the Fresnel transfer function $H_F$. The Fresnel transfer function consists of phase terms:

     $$H_F(k_x) = e^{-i \frac{k_x^2}{2k_0} z},$$

     where $k_0 = \frac{2\pi}{\lambda}$ is the wave number of the field and $\lambda$ is the wavelength.

     Applying this transfer function to the spectrum, we have

     $$U(k_x; z) = H_F(k_x) \, U_0(k_x) = U_0(k_x) \, e^{-i \frac{k_x^2}{2k_0} z}.$$

  3. Finally, the propagated field at the distance $z$ is calculated via an inverse Fourier transformation:

     $$u(x; z) = \mathcal{FT}^{-1} \bigl\{ U(k_x; z) \bigr\}.$$

In this example, the simulation was modeled using the combination of Fourier transformations and applying a phase profile corresponding to the function

$$f(k_x) = - \frac{k_x^2}{2k_0} z.$$
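The three steps above translate directly into an FFT-based sketch. This is a hedged, one-dimensional illustration; the grid size, wavelength, and Gaussian input are arbitrary choices for demonstration.

```python
import numpy as np

# Illustrative parameters (assumptions): HeNe wavelength, 1D Gaussian beam.
wavelength = 633e-9                      # wavelength lambda (m)
k0 = 2 * np.pi / wavelength              # wave number k0 = 2*pi/lambda
z = 0.1                                  # propagation distance (m)

N = 1024
x = np.linspace(-5e-3, 5e-3, N, endpoint=False)
dx = x[1] - x[0]
u0 = np.exp(-(x / 5e-4) ** 2)            # initial field u0(x)

# Step 1: Fourier transform of the input field.
U0 = np.fft.fft(u0)
kx = 2 * np.pi * np.fft.fftfreq(N, d=dx)

# Step 2: multiply by the Fresnel transfer function H_F(kx).
U = U0 * np.exp(-1j * kx**2 / (2 * k0) * z)

# Step 3: inverse Fourier transform gives the propagated field u(x; z).
u = np.fft.ifft(U)
```

Because $H_F$ is a pure phase, the total beam power $\sum_x \abs{u(x;z)}^2$ is conserved under propagation, which is a useful sanity check for such simulations.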

These two examples show the importance of the phase problem for simulating the evolution of physical systems over time. Therefore, a generic algorithm that can apply an arbitrary phase profile $f(x)$ to signals stored in the memory of a computer can simulate various physical systems.

As we will see in this thesis, applying arbitrary phases in quantum computers is not as straightforward as in classical computers. The central reason for this difficulty is rooted in the way information is stored in quantum memories. We will discuss the details of the problem in the following sections.

2.1 The outline of the thesis

In writing this thesis, we assume the reader possesses standard knowledge of quantum mechanics and basic knowledge of quantum computers. However, we will review the important basic concepts of quantum computers before moving on to solving the problem, in order to lay a solid ground for the upcoming derivations.

This thesis report is structured as follows. In Section 3, we review the basics of quantum computers and describe how we generally model problems and store information in quantum memories. In Section 4, we introduce the phase problem using the modeling language we established in the previous section. In this section, we will also discuss the considerations we need to pay attention to in the modeling of the problem, which will allow us to generalize the problem in several iterations.

Next in Section 5, we will start solving the problem defined in the previous section. As we will see, we will derive the complete solution in two major steps. In the first step, we will find a quantum phase operator that works in certain approximated conditions. We call this operator the partial phase operator. Next, we will use this operator in a larger algorithm which enables us to overcome the limitations of the previous approximations. This algorithm will be able to apply arbitrary phase profiles and we call it the sample-based phase algorithm. This section also includes the analysis of the error and the accuracy of the algorithm as well as an actual implementation of the algorithm using the standard circuit model of quantum computers.

After deriving the algorithm, we will choose a specific problem to simulate using the algorithm and characterize the errors in that case in Section 6. This will allow us to demonstrate and verify the derivations of the previous section.

3 Theory

The goal of this section is to introduce quantum computers and to provide the necessary mathematical framework for the derivations that come in the next sections. We start by discussing concepts such as qubits and quantum memories and introduce the approach we use to encode signals and fields in the memory of a quantum computer.

3.1 Quantum memories

Quantum memories are a core component of quantum computers; they play the role of storing the values of the initial parameters of a problem. A quantum computer processes that information to calculate the solution to the problem. The information processing in quantum computers is done via unitary operations acting on the memory. To use a quantum computer to solve a problem, we have to find a way to store the initial parameters in the quantum memory and find a sequence of unitary operations that can calculate the solution of the problem using those initial values.

In order to understand this process, we need to study the main properties of quantum memories and qubits. The model of quantum computers we introduce here will allow us to continue with the derivation of the phase algorithm in the upcoming sections. We start by introducing the unit of a quantum memory, which is the qubit.

Qubits

The units of quantum memories are qubits. A qubit is defined as a two-dimensional quantum system. This means it is modeled as a vector $\ket{\gamma}$ living in a two-dimensional Hilbert space. We know from linear algebra that a two-dimensional Hilbert space is spanned by two orthogonal basis states; we name one of the basis states $\ket{0}$ and the other one $\ket{1}$.

Because any linear superposition of the basis states is a valid state for the qubit, in general, a qubit $\ket{\gamma}$ can be described as the following sum of the basis states:

$$\ket{\gamma} = \psi(0) \ket{0} + \psi(1) \ket{1},$$

where $\psi(0)$ and $\psi(1)$ are the two complex probability amplitudes of the qubit being in the corresponding basis states $\ket{0}$ and $\ket{1}$.

When studying quantum algorithms, we usually do not assume any particular physical system for the realization of the qubit and define qubits abstractly as we have done here. This enables us to study quantum algorithms agnostic of the physical implementation of the quantum computer. Therefore, algorithms should in principle work on any realization of a quantum computer that can provide such qubits. However, one can think of many physical systems, such as the spin of an electron or the polarization of a photon, that can be described as a two-dimensional quantum system as we defined for the qubit.

We note that the normalization principle of quantum states enforces $\braket{\gamma}{\gamma} = 1$. Also, the choice of the global phase is arbitrary, because a global phase does not change the features of a quantum state. Hence, we have to subtract two real-valued degrees of freedom from the two complex numbers $\psi(\gamma)$. However, we will see in the next section that when we put qubit units together to form a quantum memory, this consideration quickly becomes negligible, as the degrees of freedom of the compound memory increase exponentially with the number of qubits; therefore, subtracting a constant two real numbers from the overall degrees of freedom can be neglected.

Compound memories

Similar to classical computers where a large memory is made of units (bits), we can put several qubits, as units of memory in a quantum computer, together to form a compound memory. We use the terms register or memory to describe a system consisting of several units of memory.

Let us assume we have a quantum register consisting of $n$ qubits. We name these qubits $\gamma_0, \gamma_1, \dots, \gamma_{n-1}$. We know from quantum mechanics that a compound system is described using a Hilbert space which is the tensor product of the spaces of the individual sub-systems. This means that if we put these qubits together, the basis states of the compound memory will be

$$\ket{\gamma_{n-1}}\cdots\ket{\gamma_1}\ket{\gamma_0},$$

where each $\gamma_i$ can be either 0 or 1. Note that we can count the number of these basis states, which is $N=2^n$. Furthermore, the state of the register can be in a superposition of all these basis states:

$$\ket{\psi} = \sum_{\gamma_{n-1}=0}^{1}\dots\sum_{\gamma_1=0}^{1}\sum_{\gamma_0=0}^{1} \, \psi(\gamma_{n-1},\dots,\gamma_1,\gamma_0) \ket{\gamma_{n-1}}\cdots\ket{\gamma_1}\ket{\gamma_0},$$

where $\psi(\gamma_{n-1},\dots,\gamma_1,\gamma_0)$ is the probability amplitude of the register being in the corresponding compound basis state. We can see that in this case, the state of the system is described using $N=2^n$ complex numbers $\psi(\gamma_{n-1},\dots,\gamma_1,\gamma_0)$, and in general, the qubits can be in an entangled state.

Similar to the note we mentioned when introducing qubits, we should also subtract two real-valued (equivalent to one complex-valued) degrees of freedom from the description of the state of the register because of the normalization principle and the global phase. This means that the number of complex parameters needed to describe a quantum register consisting of $n$ qubits is

$$N - 1 = 2^n - 1.$$

As we can see, the consideration coming from normalization and the global phase can be neglected in various cases because of the exponential relation $N=2^n$.

The basis index and binary encoding

Now that we have our compound memory, we can choose a binary encoding to map each basis state

$$\ket{\gamma_{n-1}}\cdots\ket{\gamma_1}\ket{\gamma_0}$$

to a unique index

$$\ket{\gamma}.$$

We can choose between different encodings based on the nature and the requirements of the problem.

If we only work with positive indices, we can choose unsigned encoding as

$$\gamma = \sum_{j=0}^{n-1} 2^{j} \gamma_{j},$$

which results in the range $\gamma \in [0, N-1]$ for the possible values of $\gamma$.

Whereas in order to have both negative and positive indices, we can use two's-complement encoding as

$$\gamma = -2^{n-1} \gamma_{n-1} + \sum_{j=0}^{n-2} 2^{j} \gamma_{j}.$$

In this case, $\gamma$ can represent any integer in the range $\gamma \in [-\frac{N}{2}, \frac{N}{2}-1]$.
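The two encodings above can be sketched directly in code. This is a minimal illustration; the convention that bit lists are ordered $[\gamma_0, \gamma_1, \dots, \gamma_{n-1}]$ is an assumption made here for clarity.

```python
# Hedged sketch of the two basis-index encodings for an n-qubit register.
# Bit lists are ordered [gamma_0, gamma_1, ..., gamma_{n-1}] (an assumption).

def unsigned_index(bits):
    """gamma = sum_j 2^j * gamma_j, giving gamma in [0, 2^n - 1]."""
    return sum(2**j * b for j, b in enumerate(bits))

def twos_complement_index(bits):
    """gamma = -2^(n-1) * gamma_{n-1} + sum_{j<n-1} 2^j * gamma_j,
    giving gamma in [-2^(n-1), 2^(n-1) - 1]."""
    n = len(bits)
    return -2**(n - 1) * bits[-1] + sum(2**j * b for j, b in enumerate(bits[:-1]))

# For n = 3 (N = 8): the pattern 1,1,1 is 7 unsigned but -1 in two's complement.
print(unsigned_index([1, 1, 1]))          # 7
print(twos_complement_index([1, 1, 1]))   # -1
```

For $n=3$, the unsigned encoding covers $\{0,\dots,7\}$ and the two's-complement encoding covers $\{-4,\dots,3\}$, matching the ranges stated above.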

After we choose a binary encoding for the basis state indices, we can reinterpret (17) as

$$\ket{\psi} = \sum_{\gamma} \psi(\gamma) \ket{\gamma},$$

where the summation on $\gamma$ is over all possible values corresponding to the chosen encoding. For example, in the case of two's-complement encoding, it translates to

$$\ket{\psi} = \sum_{\gamma=-\frac{N}{2}}^{\frac{N}{2}-1} \psi(\gamma) \ket{\gamma}.$$

In any case, there are $N=2^n$ probability amplitudes that describe the quantum register.

Sampling and storing a signal in quantum registers

Now that we have chosen a binary encoding for the basis states, the next step is to store a signal in the quantum register.

Let us assume we have sampled a signal $u(x)$ at discrete points using the uniform sampling method; that is, we have chosen a fixed sampling period $\Delta x$ and extracted the function values at the points $x_\gamma = \gamma\Delta x$.

Any choice of normalized complex numbers $\psi(\gamma)$ corresponds to a valid state for the quantum register in (23). This means that we can store $N=2^n$ sample points of a normalized complex signal $u(x)$ by initializing the quantum register in a state with probability amplitudes $\psi(\gamma) = u(x_\gamma)$ as

$$\ket{\psi} = \sum_{\gamma} u(x_\gamma) \ket{\gamma}.$$

This initialization allows us to store $N=2^n$ numbers in an $n$-qubit quantum register. Compared to classical registers, we have achieved exponential efficiency in memory usage, because in classical registers the number of degrees of freedom of the memory grows linearly with the number of units (bits), whereas in quantum registers this growth is exponential in the number of units (qubits), as described in the previous sections.
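The sampling-and-normalization step can be sketched as follows. The specific signal, sampling period, and register size are illustrative assumptions.

```python
import numpy as np

# Hedged sketch: sampling a signal u(x) and normalizing the samples so they
# form valid probability amplitudes psi(gamma) = u(x_gamma) for an n-qubit
# register (unsigned encoding assumed here).

n = 4                                    # number of qubits
N = 2**n                                 # number of basis states / samples
dx = 0.25                                # sampling period Delta x
gamma = np.arange(N)                     # unsigned basis indices
x = gamma * dx                           # sample points x_gamma = gamma * dx

u = np.exp(-(x - 2.0)**2) * np.exp(1j * x)   # some complex signal u(x)
psi = u / np.linalg.norm(u)              # normalize: sum |psi(gamma)|^2 = 1

# The 2^n amplitudes of an n-qubit register hold all N sample values.
print(psi.shape, np.isclose(np.sum(np.abs(psi)**2), 1.0))
```

Note that the normalization step is required because the amplitudes of a quantum state must satisfy $\sum_\gamma \abs{\psi(\gamma)}^2 = 1$.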

The initialization problem

In the previous section, we mentioned that we can store $N=2^n$ complex numbers in an $n$-qubit quantum register by initializing the register in a state corresponding to our desired signal.

We have to mention that the task of initialization is not trivial for all desired signals Schuld & Petruccione (2021). By definition, quantum computers are able to initialize a register in a non-entangled reset state such as

$$\ket{\psi} = \underbrace{\ket{0}\dots\ket{0}\ket{0}}_{n},$$

where all qubits start in the $\ket{0}$ state.

The initialization problem deals with generating any desired state, as described in (25), starting from the trivial reset state in (26). It is not straightforward to initialize the register in an arbitrary state. We need to derive specific initializers for our input states or use existing initializers from the literature. For example, we can name the Kitaev-Webb algorithm for initializing a Gaussian state Kitaev & Webb (2009).

The goal of this work is not to deal with the initialization problem; therefore, we generally assume the existence of an initializer for our desired initial input signals.

4 Introducing the Problem

After discussing the preliminary theory, we are ready to state the problem and derive the solution in the following sections.

As briefly mentioned in the introduction, our goal is to apply arbitrary phase profiles to signals stored in a quantum computer.

Let us assume we have a quantum register with $n$ qubits initialized in an arbitrary state

$$\ket{\psi} = \sum_{\gamma=-\frac{N}{2}}^{\frac{N}{2}-1} \psi(\gamma) \ket{\gamma},$$

where $N=2^n$ is the number of basis states of the memory and $\ket{\gamma}$ are the computational basis states. We might often leave out the upper and lower summation bounds to indicate the sum over all possible values as

$$\ket{\psi} = \sum_\gamma \psi(\gamma) \ket{\gamma}.$$

We would like to be able to apply an arbitrary (real-valued) phase profile $f(\gamma)$ to this register, meaning to transform the state $\ket{\psi}$ to $\ket{\psi'}$,

$$\ket{\psi} \longrightarrow \ket{\psi'},$$

such that

$$\ket{\psi'} = \sum_\gamma \psi(\gamma) \, e^{i f(\gamma)} \ket{\gamma}.$$

One general approach to do that is to derive an operator $U$ for each phase profile $f(\gamma)$ that applies the desired transformation on the state $\ket{\psi}$ as

$$\ket{\psi} \longrightarrow \ket{\psi'} = U\ket{\psi} = \sum_\gamma \psi(\gamma) \, e^{i f(\gamma)} \ket{\gamma}.$$

Previously, we showed how this transformation can be done for the special case of polynomial phase profiles, where $f(\gamma)=\alpha \gamma^m$ for arbitrary $\alpha$ and $m$ values, using polynomial phase operators Davani (2023); Cholsuk et al. (2023); Davani (2023). More specifically, we derived operators $U_m(\alpha)$ that apply the following transformation on the register:

$$\ket{\psi'} = U_m(\alpha) \ket{\psi} = \sum_\gamma \psi(\gamma) \, e^{i \alpha \gamma^m} \ket{\gamma}.$$

We call $U_m$ polynomial phase operators because they apply a phase to the wavefunction of the quantum register which is polynomial with respect to the basis index $\gamma$, as $f(\gamma)=\alpha \gamma^m$.

We also note that the $U_m$ operator can be implemented using $\mathcal{O}(n^m)$ gates Davani (2023).

While polynomial phase profiles are very useful in many applications, such as simulating Fresnel-approximated beam propagation (as discussed in the referenced works Cholsuk et al. (2023); Davani (2023)), there are many applications where we would like to apply arbitrary phase profiles and not just polynomial phases.

In principle, arbitrary phase profiles can be applied using the polynomial phase operators by Taylor-expanding $f(\gamma)$ at an arbitrary point $\overline{\gamma}$ and working with an approximated Taylor expansion. The Taylor expansion can be written as

$$f(\gamma) = \sum_{k=0}^{\infty} \frac{f^{(k)}(\overline{\gamma})}{k!} \, (\gamma-\overline{\gamma})^k.$$

If we keep the terms up to the $m$-th order, we end up with

$$\Tilde{f}(\gamma) = \sum_{k=0}^{m} \frac{f^{(k)}(\overline{\gamma})}{k!} \, (\gamma-\overline{\gamma})^k,$$

which is a summation that includes polynomial terms $\gamma^k$ where $k \in \{0,1,\dots,m\}$.

This means that the phase $e^{i \Tilde{f}(\gamma)}$ can be applied by combining $U_k$ operators, and the total computational complexity will be $\mathcal{O}(n^m)$.
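A quick classical check of the truncation idea: the snippet below compares the exact phase $e^{if(\gamma)}$ with the truncated phase $e^{i\Tilde{f}(\gamma)}$ for an illustrative profile $f(\gamma)=\sin\gamma$ expanded at $\overline{\gamma}=0$ up to order $m=5$. The profile and expansion point are assumptions for demonstration only.

```python
import numpy as np
from math import factorial

# Hedged sketch: a truncated Taylor expansion f~(gamma) of a phase profile.

def taylor_phase(f_derivs, gamma, center):
    """Evaluate f~(gamma) = sum_k f^(k)(center)/k! * (gamma - center)^k,
    given a list of derivative values f^(k)(center) for k = 0..m."""
    return sum(d / factorial(k) * (gamma - center)**k
               for k, d in enumerate(f_derivs))

gamma = np.linspace(-1.0, 1.0, 101)

# Example profile f(gamma) = sin(gamma); derivatives at 0: 0, 1, 0, -1, 0, 1.
derivs = [0.0, 1.0, 0.0, -1.0, 0.0, 1.0]
f_exact = np.sin(gamma)
f_approx = taylor_phase(derivs, gamma, 0.0)

# The phase error |e^{i f} - e^{i f~}| shrinks as the order m grows.
print(np.max(np.abs(np.exp(1j * f_exact) - np.exp(1j * f_approx))))
```

On this narrow interval a fifth-order expansion already keeps the phase error small; for profiles that vary strongly over the register range, the required order $m$ grows, which is exactly the inefficiency discussed next.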

We can see that for phase profiles where a low-order Taylor approximation is not accurate, we have to lower the approximation error by increasing $m$. This is not ideal, because for very large $m$ the computational complexity increases as

$$\lim_{m\to\infty} \mathcal{O}(n^m).$$

This means that applying arbitrary phase profiles using this method is not efficient in practice.

The solution we propose in this work for applying arbitrary phase profiles is based on a different approach. We will discuss the approach in the following.

4.1 The proposed solution

Instead of deriving an operator $U$ for each phase profile $f(\gamma)$, we introduce a secondary register and propose an algorithm that uses the state encoded in the secondary register to apply arbitrary phase profiles to a primary register. This approach is inspired by the work of Lloyd et al. (2014) and the further investigation of that work by Kimmel et al. (2017) on the topic of sample-based Hamiltonian simulation.

Assume we have a primary $n$-qubit quantum register (we call it register $A$) initialized in an arbitrary state

$$\ket{\psi} = \sum_\gamma \psi(\gamma) \ket{\gamma},$$

and a secondary register (we call it register $B$) with the same number of qubits in the state

$$\ket{\phi} = \sum_\gamma \phi(\gamma) \ket{\gamma}.$$

What we propose is a quantum operator that acts on both registers together and applies a phase to the primary register (with a defined error) based on the secondary register as

$$\ket{\psi} = \sum_\gamma \psi(\gamma) \ket{\gamma} \,\longrightarrow\, \ket{\psi'} = \sum_\gamma \psi(\gamma) \, e^{i \alpha\abs{\phi(\gamma)}^2} \ket{\gamma}.$$

We will discuss the exact formulation of the algorithm and the derivations in the following sections.

If we compare this to the approach discussed in (31), we realize that this algorithm corresponds to applying a phase profile equal to

$$f(\gamma) = \alpha \abs{\phi(\gamma)}^2.$$

This gives us the freedom of applying arbitrary phase profiles to the state $\ket{\psi}$ as long as we can initialize the secondary register to a state corresponding to the probability amplitudes $\phi(\gamma)$.

We must take several steps to derive the complete algorithm. First, we will find an operator that performs the transformation in (38), but with the assumption that $\alpha$ is small. To distinguish this case from the general case, we use a different letter, $\Delta$, for the phase coefficient, as $\alpha=\Delta$. We define the smallness assumption such that $\Delta^2 \approx 0$; therefore, the square and higher powers of $\Delta$ are negligible in our derivations. Thus, the operator we derive in this case will apply the phase profile

$$f(\gamma) = \Delta \abs{\phi(\gamma)}^2$$

to our primary register for small $\Delta$.

We call this operator the partial phase operator. It uses one copy of the secondary state $\ket{\phi}$ to perform the transformation.

Next, we use the partial phase operator iteratively, using multiple copies of the state $\ket{\phi}$, and apply the phase steps consecutively to the primary memory $m$ times to apply an overall phase of

$$f(\gamma) = m\Delta \abs{\phi(\gamma)}^2.$$

This enables us to overcome the limitation of the assumption $\Delta^2 \approx 0$ and to apply arbitrary phases as

$$f(\gamma) = \alpha \abs{\phi(\gamma)}^2,$$

where $\alpha=m\Delta$.
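The iterative idea can be illustrated classically on the state amplitudes: $m$ small phase steps of size $\Delta$ accumulate to the full coefficient $\alpha = m\Delta$. The random input states below are illustrative assumptions.

```python
import numpy as np

# Hedged classical sketch of the iterative phase accumulation alpha = m * Delta.

rng = np.random.default_rng(0)
N = 8
psi = rng.normal(size=N) + 1j * rng.normal(size=N)
psi = psi / np.linalg.norm(psi)          # primary register amplitudes
phi = rng.normal(size=N) + 1j * rng.normal(size=N)
phi = phi / np.linalg.norm(phi)          # secondary register amplitudes

alpha = 2.0
m = 100
delta = alpha / m                        # small phase step Delta

state = psi.copy()
for _ in range(m):                       # m applications of the small step
    state = state * np.exp(1j * delta * np.abs(phi)**2)

target = psi * np.exp(1j * alpha * np.abs(phi)**2)
print(np.allclose(state, target))
```

Classically the steps compose exactly, since the phases commute; in the quantum algorithm each partial step additionally carries an approximation error, which is why the error analysis and the required number of iterations $m$ are treated in the derivations that follow.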

We call the final iterative algorithm the sample-based phase algorithm because it uses the samples of the encoded phase profile in a quantum register $\ket{\phi}$ to apply arbitrary phases to a primary register.

We furthermore analyze the error introduced by this iterative approach and derive a relation for the required number of iterations $m$ based on the parameters of the problem, such as $\alpha$.

Now that we have introduced the main idea of this work, we progress to modeling the problem mathematically in the next section.

4.2 Preliminary mathematical modeling

In this section, we start modeling the problem and discussing the preliminary points we must consider to do so. After the discussion, we restate the problem in its most generalized form and use this generalized form for our derivations in Section 5.

As discussed in the previous section, we assume that we have a primary register $\ket{\psi}$ that we want to apply a phase to, using the phase profile stored in a secondary register $\ket{\phi}$.

At first, we realize that this means we should search for an operator that acts on both registers together, to be able to transform the primary register based on the state of the secondary register. In the ideal case, we can search for an operator that applies the following operation:

$$U\left( \ket{\psi}_A\otimes\ket{\phi}_B \right) = \ket{\psi'}_A\otimes\ket{\phi'}_B,$$

where $\ket{\psi'}$ is the desired final state given as

$$\ket{\psi'} = \sum_\gamma \psi(\gamma) \, e^{i \alpha\abs{\phi(\gamma)}^2} \ket{\gamma},$$

and $\ket{\phi'}$ is the final state of the secondary register. We do not expect any specific condition for this state, and any state fulfills our requirements as long as the output at the primary register is $\ket{\psi'}$. The subscripts $A$ and $B$ indicate the primary and secondary registers; however, we might leave them out in cases where it is unambiguous from the context.

It might not be possible to find such an operator as described in (43), because of the assumption that the output state after applying the $U$ operator is separable and not entangled. In general, the final state can be entangled; therefore, we should model the operation of $U$ as

$$U\left( \ket{\psi}_A\otimes\ket{\phi}_B \right) = \ket{\Omega'}_{AB}.$$

In this case, we are looking for an operator $U$ that results in a final state $\ket{\Omega'}$ with the requirement that

$$\tr_B \ket{\Omega'}\bra{\Omega'} \approx \ket{\psi'}\bra{\psi'},$$

where the $\tr_B(\cdot)$ operator traces over the secondary register and calculates the density matrix describing the primary register after the operation of $U$.
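The partial trace $\tr_B$ can be sketched numerically by reshaping the joint density matrix into a rank-4 tensor and summing over the secondary indices. This is a generic textbook construction, shown here as an illustration; the chosen test states are assumptions.

```python
import numpy as np

# Hedged sketch: the partial trace tr_B over the secondary register for two
# N-dimensional registers, via reshape + index contraction.

def partial_trace_B(rho_AB, N):
    """Trace out the second N-dimensional subsystem of an (N*N, N*N) matrix."""
    t = rho_AB.reshape(N, N, N, N)       # indices: (a, b, a', b')
    return np.einsum('abcb->ac', t)      # contract b with b'

# Check on a separable state: tr_B(rho ⊗ sigma) = rho (since tr sigma = 1).
N = 4
psi = np.arange(1, N + 1, dtype=complex)
psi = psi / np.linalg.norm(psi)
rho = np.outer(psi, psi.conj())
sigma = np.eye(N) / N                    # maximally mixed secondary state
print(np.allclose(partial_trace_B(np.kron(rho, sigma), N), rho))
```

The reshape works because `np.kron` orders the joint indices as $(aN+b,\, a'N+b')$, so the rank-4 tensor carries the $A$ and $B$ indices separately and `einsum` can contract the two $B$ indices.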

In this case, although the final state of the registers as a whole might be entangled, we are interested in the primary register being in the state given in (44). Therefore, the operator we search for still solves the problem if the final density matrix of the primary register is our desired state.

Note that this generalized modeling also includes the first case, because if the final $\ket{\Omega'}_{AB}$ is non-entangled, we can again write

$$\ket{\Omega'}_{AB} = \ket{\psi'}_A\otimes\ket{\phi'}_B,$$

and we will have

$$\tr_B \ket{\Omega'}\bra{\Omega'} = \ket{\psi'}\bra{\psi'}.$$

In our derivations, we reserve the possibility of the final state not being exactly equal to $\ket{\psi'}\bra{\psi'}$, and that is why we have an approximation sign instead of an equality in (46). We use the trace distance to quantify this error. The trace distance of two density matrices $\rho$ and $\sigma$ is defined as

$$\text{D}(\rho,\sigma) = \frac{1}{2} \norm{\rho-\sigma}_1,$$

where the norm operator is the trace norm, defined as

$$\norm{C}_1 = \tr \abs{C} = \tr \sqrt{C^\dagger C}$$

for any operator $C$.

This means that in our case, we use the trace distance between the output state $\tr_B \ket{\Omega'}\bra{\Omega'}$ and the desired state $\ket{\psi'}\bra{\psi'}$ to measure the error:

$$\delta = \text{D}\Big( \tr_B \ket{\Omega'}\bra{\Omega'}, \,\, \ket{\psi'}\bra{\psi'} \Big).$$
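A minimal numerical sketch of this error measure: for a Hermitian difference $\rho-\sigma$, the trace norm equals the sum of the absolute eigenvalues, which gives a direct implementation.

```python
import numpy as np

# Hedged sketch of the trace distance D(rho, sigma) = 1/2 * ||rho - sigma||_1.

def trace_distance(rho, sigma):
    """Trace distance between two density matrices."""
    diff = rho - sigma
    # diff is Hermitian, so ||diff||_1 = sum of |eigenvalues|.
    return 0.5 * np.sum(np.abs(np.linalg.eigvalsh(diff)))

# Identical states have distance 0; orthogonal pure states have distance 1.
ket0 = np.array([1.0, 0.0])
ket1 = np.array([0.0, 1.0])
rho0 = np.outer(ket0, ket0)
rho1 = np.outer(ket1, ket1)
print(trace_distance(rho0, rho0), trace_distance(rho0, rho1))
```

These two limiting values, 0 for identical states and 1 for perfectly distinguishable states, are what make the trace distance a natural error metric here.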

After this preliminary modeling, we recognize that we need to use the density matrix representation to derive the operator, because the output state can be entangled, and we can only calculate the output of the primary memory after the $U$ operator by tracing over the secondary register.

In order to have a consistent mathematical model throughout our derivations, we restate the problem in the most general form using density matrices and use this modeling throughout this work.

4.3 Restatement of the problem in the general case

Let us now restate the problem after the consideration from the previous section and define all the important parameters for the following derivations.

We start by defining the input states. We assume we have two quantum registers, each consisting of $n$ qubits. We name the primary register $\rho$ and the secondary register $\sigma$. In the most general form, the registers can be in the states

$$\rho = \sum_{\gamma,\mu} \rho_{\gamma\mu} \ket{\gamma}\bra{\mu},$$

and

$$\sigma = \sum_{\gamma,\mu} \sigma_{\gamma\mu} \ket{\gamma}\bra{\mu},$$

where $\gamma$ and $\mu$ represent the basis indices of the $n$-qubit registers.

We assume the initial states of the registers are not entangled and introduce the state $S$ describing the initial state of the complete memory consisting of the two registers together as

$$S = \rho \otimes \sigma.$$

Our desired final state for the primary register is the state $\rho'$ with the following definition:

$$\rho' = \sum_{\gamma,\mu} \rho_{\gamma\mu} \, e^{i\alpha (\sigma_{\gamma\gamma} - \sigma_{\mu\mu})} \ket{\gamma}\bra{\mu}.$$

Note how the above definitions are a generalization of the pure state case; because if we assume pure states as inputs, we will have

$$\begin{align} \rho &= \ket{\psi}\bra{\psi}, & \ket{\psi} &= \sum_\gamma \psi(\gamma) \ket{\gamma} \\ \sigma &= \ket{\phi}\bra{\phi}, & \ket{\phi} &= \sum_\gamma \phi(\gamma) \ket{\gamma} \\ \rho' &= \ket{\psi'}\bra{\psi'}, & \ket{\psi'} &= \sum_\gamma \psi(\gamma) \, e^{i \alpha\abs{\phi(\gamma)}^2} \ket{\gamma} \end{align}$$

In this case, we have $\sigma_{\gamma\gamma} = \abs{\phi(\gamma)}^2$.
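The reduction to the pure-state case can be verified numerically: building $\rho'$ element-wise from the phase factor $e^{i\alpha(\sigma_{\gamma\gamma}-\sigma_{\mu\mu})}$ reproduces $\ket{\psi'}\bra{\psi'}$. The random states below are illustrative assumptions.

```python
import numpy as np

# Hedged sketch: checking that the density-matrix definition of rho' above
# reduces to |psi'><psi'| for pure-state inputs.

rng = np.random.default_rng(1)
N = 4
alpha = 1.3

psi = rng.normal(size=N) + 1j * rng.normal(size=N)
psi = psi / np.linalg.norm(psi)
phi = rng.normal(size=N) + 1j * rng.normal(size=N)
phi = phi / np.linalg.norm(phi)

rho = np.outer(psi, psi.conj())
sigma = np.outer(phi, phi.conj())

# rho'_{gamma,mu} = rho_{gamma,mu} * exp(i*alpha*(sigma_gg - sigma_mm))
diag = np.real(np.diag(sigma))                  # sigma_gg = |phi(gamma)|^2
rho_prime = rho * np.exp(1j * alpha * (diag[:, None] - diag[None, :]))

# Compare with the pure-state form |psi'><psi'|.
psi_prime = psi * np.exp(1j * alpha * np.abs(phi)**2)
print(np.allclose(rho_prime, np.outer(psi_prime, psi_prime.conj())))
```

The agreement follows because $\rho'_{\gamma\mu} = \psi(\gamma)\psi^*(\mu)\,e^{i\alpha(\abs{\phi(\gamma)}^2-\abs{\phi(\mu)}^2)}$, which is exactly the $(\gamma,\mu)$ element of $\ket{\psi'}\bra{\psi'}$.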

Going back to the general density matrix case, we search for an operator $U$ acting on the two registers $\rho$ and $\sigma$ such that tracing over the second register at the output results in the desired state $\rho'$. Mathematically speaking, the operator $U$ should apply the transformation

$$F = USU^\dagger = U(\rho\otimes\sigma)U^\dagger,$$

where, taking the partial trace over the secondary register, we end up with the state

$$\tr_B F \approx \rho'.$$

The schematic representation of the operator is shown in Figure 1.

Figure 1: The schematic figure of the partial phase operator.

We note that the $U$ operator should be independent of the input states $\rho$ and $\sigma$, so that applying different phase profiles does not require changing the operator $U$ itself. We want to apply different phase profiles using this operator only by changing the input state $\sigma$.

In order to analyze the output of the algorithm, we define the subsystem at the output of the primary register as

$$F_A = \tr_B F,$$

and at the output of the secondary register as

$$F_B = \tr_A F.$$

Then we define the error at the output of the algorithm as the trace distance between the final state of the primary register and the desired state:

$$\delta = \text{D}(F_A, \rho') = \frac{1}{2} \tr\abs{F_A - \rho'}.$$

Now that we have restated the problem in the general case, we are ready to move forward into the derivation.

In order to provide an overview of the derivation steps, we will briefly discuss the outline of the upcoming derivations in the following and start the derivations afterward.

4.4Outline of the derivations

In the following sections, we will take these steps to derive the sample-based phase algorithm described in Section 4.3.

  1. First, we derive the operator UU that performs the desired transformation described in (57) and (58). As we will see, in order to derive such an operator, we will need to use an approximation. In this approximation, we choose the phase coefficient α in (55) to be small.

    As previously mentioned, we use the letter Δ for the phase coefficient in this case to distinguish it from the non-approximated case. The assumption Δ20\Delta^2 \approx 0 allows us to use Taylor expansions to simplify the derivations. We will find the UU operator in this case with the condition that it satisfies (58) up to the first-order approximation in Δ. We call this operator the partial phase operator.

  2. The next step is to find an implementation for the UU operator previously derived using standard quantum computer gates. It is important to show that the UU operator can be straightforwardly implemented in practice. We will find an efficient implementation of UU using CNOT and controlled phase gates.

  3. And finally, we use the partial phase operator to implement an iterative algorithm that applies the phase described in (55) without the approximation of small α. In this algorithm, we repeat the partial phase operator mm times with small phase steps Δ to apply an arbitrary phase coefficient of

    α=mΔ.\alpha = m \Delta.

    We call this final algorithm the sample-based phase algorithm because it needs copies of the secondary state σ and uses them in iterations to apply the complete phase. We finish the derivations by analyzing the errors of this algorithm as a function of the parameters mm and Δ.
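The iteration α = mΔ outlined above can be previewed numerically. The sketch below (Python/NumPy; the diagonal form of the UU operator used here anticipates the derivation in Section 5, and the register dimension N=4N=4 and random pure states are illustrative assumptions) repeats the small-Δ partial phase step mm times on fresh copies of σ and checks that the residual error shrinks as mm grows.

```python
import numpy as np

rng = np.random.default_rng(5)
N = 4  # illustrative register dimension

def random_density(N):
    """Return |v><v| for a random normalized complex vector v (an assumption for the demo)."""
    v = rng.normal(size=N) + 1j * rng.normal(size=N)
    v /= np.linalg.norm(v)
    return np.outer(v, v.conj())

rho, sigma = random_density(N), random_density(N)
d = np.real(np.diag(sigma))                      # sigma_{gamma gamma}

alpha = 1.0                                      # total phase coefficient to apply
target = rho * np.exp(1j * alpha * (d[:, None] - d[None, :]))

# Diagonal of A: +1 on the (gamma, mu) basis states with gamma == mu, -1 otherwise
# (this anticipates the form of A derived later in the text).
diag_A = np.array([1.0 if g == m else -1.0 for g in range(N) for m in range(N)])

def one_step(rho_in, delta):
    """Apply U = exp(i*delta/2*A) to rho_in (x) sigma and trace out the second register."""
    U = np.diag(np.exp(1j * delta / 2 * diag_A))
    F = U @ np.kron(rho_in, sigma) @ U.conj().T
    # Partial trace over register B: reshape to (gamma, i, mu, j) and sum over i == j.
    return F.reshape(N, N, N, N).trace(axis1=1, axis2=3)

errors = []
for m in (10, 100):
    state = rho
    for _ in range(m):
        state = one_step(state, alpha / m)       # Delta = alpha / m per iteration
    errors.append(np.max(np.abs(state - target)))

assert errors[1] < errors[0]                     # more, smaller steps -> smaller error
```

Splitting the same total phase α into more iterations with smaller Δ reduces the accumulated first-order approximation error, which is the property the error analysis at the end of the derivations makes precise.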

5The Derivations

5.1The partial phase operator

Let us start deriving the partial phase operator. The initial state of the memory including both primary and secondary registers is

S=ρσ.S = \rho \otimes \sigma.

In several parts of our upcoming derivations, we use the matrix representation of density matrices; therefore it is useful to look at the input states in their matrix form in the computational basis.

ρ=[ρ00ρ01ρ0,N1ρ10ρ11ρ1,N1ρN1,0ρN1,1ρN1,N1]N×Nσ=[σ00σ01σ0,N1σ10σ11σ1,N1σN1,0σN1,1σN1,N1]N×N\rho = \begin{bmatrix} \rho_{00} & \rho_{01} & \dots & \rho_{0, N-1}\\ \rho_{10} & \rho_{11} & \dots & \rho_{1, N-1}\\ \vdots & \vdots & \ddots & \vdots\\ \rho_{N-1,0} & \rho_{N-1,1} & \dots & \rho_{N-1,N-1}\\ \end{bmatrix}_{N \times N} \quad \sigma = \begin{bmatrix} \sigma_{00} & \sigma_{01} & \dots & \sigma_{0, N-1}\\ \sigma_{10} & \sigma_{11} & \dots & \sigma_{1, N-1}\\ \vdots & \vdots & \ddots & \vdots\\ \sigma_{N-1,0} & \sigma_{N-1,1} & \dots & \sigma_{N-1,N-1}\\ \end{bmatrix}_{N \times N}

We use the unsigned encoding for the basis indices, which means the indices are in the range γ,μ[0,N1]\gamma,\mu \in [0,N-1]; however, this choice is only for notational convenience, and the operators we derive solve the problem for a general encoding.

We also introduce another tool to simplify our notation. We define L=N1L=N-1 in order to simplify the index N1N-1 appearing in the matrix representations. This means the input states are written as

ρ=[ρ00ρ01ρ0Lρ10ρ11ρ1LρL0ρL1ρLL]N×Nσ=[σ00σ01σ0Lσ10σ11σ1LσL0σL1σLL]N×N\rho = \begin{bmatrix} \rho_{00} & \rho_{01} & \dots & \rho_{0L}\\ \rho_{10} & \rho_{11} & \dots & \rho_{1L}\\ \vdots & \vdots & \ddots & \vdots\\ \rho_{L0} & \rho_{L1} & \dots & \rho_{LL}\\ \end{bmatrix}_{N \times N} \quad \sigma = \begin{bmatrix} \sigma_{00} & \sigma_{01} & \dots & \sigma_{0L}\\ \sigma_{10} & \sigma_{11} & \dots & \sigma_{1L}\\ \vdots & \vdots & \ddots & \vdots\\ \sigma_{L0} & \sigma_{L1} & \dots & \sigma_{LL}\\ \end{bmatrix}_{N \times N}

The initial state SS thus will have the following block matrix form

S=ρσ=[ρ00σρ01σρ0Lσρ10σρ11σρ1LσρL0σρL1σρLLσ]N2×N2.S = \rho\otimes\sigma = \begin{bmatrix} \rho_{00}\,\sigma & \rho_{01}\,\sigma & \dots & \rho_{0L}\,\sigma\\ \rho_{10}\,\sigma & \rho_{11}\,\sigma & \dots & \rho_{1L}\,\sigma\\ \vdots & \vdots & \ddots & \vdots\\ \rho_{L0}\,\sigma & \rho_{L1}\,\sigma & \dots & \rho_{LL}\,\sigma\\ \end{bmatrix}_{N^2 \times N^2}.
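This block structure of the Kronecker product is easy to verify numerically. A minimal NumPy sketch (one-qubit registers, N=2N=2, with randomly chosen pure states as an assumption):

```python
import numpy as np

rng = np.random.default_rng(0)

def random_density(N):
    """Return |v><v| for a random normalized complex vector v."""
    v = rng.normal(size=N) + 1j * rng.normal(size=N)
    v /= np.linalg.norm(v)
    return np.outer(v, v.conj())

N = 2
rho, sigma = random_density(N), random_density(N)

# The joint initial state S = rho (x) sigma is the Kronecker product,
# whose (gamma, mu) block is rho[gamma, mu] * sigma.
S = np.kron(rho, sigma)
assert np.allclose(S[0:N, 0:N], rho[0, 0] * sigma)    # block (0, 0)
assert np.allclose(S[0:N, N:2*N], rho[0, 1] * sigma)  # block (0, 1)
```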

We keep in mind that we want to find an operator UU such that

trB[USU]=ρ.\tr_B\left[USU^\dagger\right] = \rho'.

We find UU using the following steps:

  1. First we simplify the right-hand side of (67) and find a simplified form for the desired state ρ\rho'.

  2. Next, we simplify the left-hand side of the equation by choosing an ansatz for UU and posing several assumptions on the form of it.

  3. In the end, we equate both sides of the equation up to the first-order approximation to find UU in these assumption conditions.

We also note that in the current steps of the derivations, we are assuming that the phase coefficient we would like to apply (Δ) is small and we use this assumption in the calculations.

Simplifying ρ\rho'

Let us investigate the expected output state ρ\rho' in more detail. We had

ρ=γ,μργμeiΔ(σγγσμμ)γμ,\rho' = \sum_{\gamma,\mu} \rho_{\gamma\mu} e^{i\Delta (\sigma_{\gamma\gamma} - \sigma_{\mu\mu})} \ket{\gamma}\bra{\mu},

where Δ is a small number such that Δ20\Delta^2 \approx 0. This assumption means that we can write

eiΔ(σγγσμμ)1+iΔ(σγγσμμ).e^{i\Delta(\sigma_{\gamma\gamma}-\sigma_{\mu\mu})} \approx 1 + i\Delta(\sigma_{\gamma\gamma}-\sigma_{\mu\mu}).

Therefore we have

ργ,μργμ(1+iΔ(σγγσμμ))γμ=ρ+iΔγ,μργμ(σγγσμμ)γμ.\begin{align} \rho' &\approx \sum_{\gamma,\mu} \rho_{\gamma\mu} \Big( 1 + i\Delta(\sigma_{\gamma\gamma}-\sigma_{\mu\mu}) \Big) \ket{\gamma}\bra{\mu}\\ \nonumber &= \rho + i\Delta \sum_{\gamma,\mu} \rho_{\gamma\mu} (\sigma_{\gamma\gamma}-\sigma_{\mu\mu}) \ket{\gamma}\bra{\mu}. \end{align}

We define the matrix

G=γ,μργμ(σγγσμμ)γμ,G = \sum_{\gamma,\mu} \rho_{\gamma\mu} \, (\sigma_{\gamma\gamma}-\sigma_{\mu\mu}) \, \ket{\gamma}\bra{\mu},

which allows us to write

ρρ+iΔG,\rho' \approx \rho + i\Delta G,

where the approximation is correct up to the first order in Δ. With this, we have simplified the term on the right-hand side of (67). Next, we have to do similarly for the term trB[USU]\tr_B\left[USU^\dagger\right] on the left-hand side.

Before continuing the derivations, let us have a look at the matrix representation GG in the computational basis.

G=[0ρ01(σ00σ11)ρ0L(σ00σLL)ρ10(σ11σ00)0ρ1L(σ11σLL)ρL0(σLLσ00)ρL1(σLLσ11)0]N×N.G = \begin{bmatrix} 0 & \rho_{01}(\sigma_{00}-\sigma_{11}) & \dots & \rho_{0L}(\sigma_{00}-\sigma_{LL})\\ \rho_{10}(\sigma_{11}-\sigma_{00}) & 0 & \dots & \rho_{1L}(\sigma_{11}-\sigma_{LL})\\ \vdots & \vdots & \ddots & \vdots\\ \rho_{L0}(\sigma_{LL}-\sigma_{00}) & \rho_{L1}(\sigma_{LL}-\sigma_{11}) & \dots & 0 \\ \end{bmatrix}_{N \times N}.
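The matrix GG can be built directly from its definition. A short NumPy sketch (the dimension N=4N=4 and random pure states are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(1)
N = 4

def random_density(N):
    """Return |v><v| for a random normalized complex vector v."""
    v = rng.normal(size=N) + 1j * rng.normal(size=N)
    v /= np.linalg.norm(v)
    return np.outer(v, v.conj())

rho, sigma = random_density(N), random_density(N)

d = np.real(np.diag(sigma))              # the diagonal elements sigma_{gamma gamma}
# G_{gamma mu} = rho_{gamma mu} * (sigma_{gamma gamma} - sigma_{mu mu}), vectorized
G = rho * (d[:, None] - d[None, :])

assert np.allclose(np.diag(G), 0)        # the diagonal vanishes, as in the matrix form above
assert np.allclose(G, -G.conj().T)       # G is anti-Hermitian, so rho + i*Delta*G stays Hermitian
```

The anti-Hermiticity check is a useful sanity test: it guarantees that the first-order output ρ + iΔG remains a Hermitian matrix, as a density matrix must be.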

Simplifying trB[USU]\tr_B\left[USU^\dagger\right]

In order to simplify the term trB[USU]\tr_B\left[USU^\dagger\right], we take the following steps

  1. First, we suggest an ansatz for UU.

  2. Next, we simplify the term USUUSU^\dagger using the ansatz and the assumption of Δ20\Delta^2 \approx 0.

  3. Finally, we trace over the secondary register by calculating trB[USU]\tr_B\left[USU^\dagger\right]. We will end up with a simplified approximated term which we can use to equate with (72) to find UU.

Let us start with the ansatz. We propose UU to have the form of an exponential of a Hermitian operator AA as

U=eiΔ2A,U = e^{i \frac{\Delta}{2} A},

where AA is Hermitian (A=AA^\dagger=A) and has the property A2=IA^2=I. The motivation for this ansatz comes from deriving the solution for special cases such as when the registers have only one qubit (n=1n=1). Finding the exact solution for these simpler cases is more straightforward and leads to guessing such an ansatz for the general case, since the special-case solutions also have these properties. We will see how using this form simplifies the derivations and enables us to find an AA operator that satisfies our conditions.

We note that using these assumptions, we have

U=eiΔ2A=cos(Δ2)I+isin(Δ2)A.U = e^{i \frac{\Delta}{2} A} = \cos(\frac{\Delta}{2}) I + i \sin(\frac{\Delta}{2}) A.
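Because A2=IA^2=I forces the eigenvalues of AA to be ±1, the exponential collapses to this cosine/sine form. A quick numerical check (a sketch using the Pauli-X operator tensored with the identity as an assumed stand-in for AA, chosen only because it is Hermitian and squares to the identity):

```python
import numpy as np

# A Hermitian operator with A @ A = I: Pauli-X on one qubit, identity on the other.
X = np.array([[0, 1], [1, 0]], dtype=complex)
A = np.kron(X, np.eye(2))
assert np.allclose(A @ A, np.eye(4))

delta = 0.01
# Matrix exponential via the eigendecomposition of the Hermitian A.
eigvals, V = np.linalg.eigh(A)
U = V @ np.diag(np.exp(1j * delta / 2 * eigvals)) @ V.conj().T

# Closed form: cos(Delta/2) I + i sin(Delta/2) A, valid whenever A^2 = I.
U_closed = np.cos(delta / 2) * np.eye(4) + 1j * np.sin(delta / 2) * A
assert np.allclose(U, U_closed)
```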

In order to simplify UU, we use the assumption of small Δ and write

U=eiΔ2A=I+iΔ2A+O(Δ2)I+iΔ2A,\begin{align} U = e^{i \frac{\Delta}{2} A} &= I + i \frac{\Delta}{2} A + \mathcal{O}(\Delta^2)\\ &\approx I + i \frac{\Delta}{2} A, \end{align}

and similarly,

UIiΔ2A.U^\dagger \approx I - i \frac{\Delta}{2} A.

We can then use this simplified form of UU to write

USU=U(ρσ)U=ρσ+iΔ2[A(ρσ)(ρσ)A]+O(Δ2)ρσ+iΔ2[A(ρσ)(ρσ)A]=ρσ+iΔ2[A,ρσ],\begin{align} USU^\dagger = U(\rho\otimes\sigma)U^\dagger &= \rho\otimes\sigma + i \frac{\Delta}{2} \left[ A (\rho\otimes\sigma) - (\rho\otimes\sigma) A \right] + \mathcal{O}(\Delta^2)\\ \nonumber &\approx \rho\otimes\sigma + i \frac{\Delta}{2} \left[ A (\rho\otimes\sigma) - (\rho\otimes\sigma) A \right]\\ \nonumber &= \rho\otimes\sigma + i \frac{\Delta}{2} \left[ A,\, \rho\otimes\sigma \right], \end{align}

where the approximation is correct up to the first order in Δ.

At this point, we have simplified the term USUUSU^\dagger using the ansatz and the Δ approximation. Recall that we wanted to find UU such that it satisfies (67). This means that we need to trace over the second register in (78) and find AA such that the traced-over density matrix is equal to the density matrix in (72).

Tracing over the secondary register, we have

trBU(ρσ)Uρ+iΔ2M,\tr_B{ U(\rho\otimes\sigma)U^\dagger } \approx \rho + i \frac{\Delta}{2} M,

where MM is defined as

M=trB[A(ρσ)(ρσ)A]=trB[A,ρσ].M = \tr_B \left[ A (\rho\otimes\sigma) - (\rho\otimes\sigma) A \right] = \tr_B \left[ A,\, \rho\otimes\sigma \right].

It is not straightforward to further simplify this equation unless we propose several assumptions for AA. We have in mind that both AA and UU are N2×N2N^2\times N^2 matrices that act on a memory consisting of two nn-qubit registers, namely S=ρσS=\rho\otimes\sigma. This means that we can represent AA as a block matrix of the form

A=[A00A01A0LA10A11A1LAL0AL1ALL]N2×N2A = \begin{bmatrix} A_{00} & A_{01} & \dots & A_{0L}\\ A_{10} & A_{11} & \dots & A_{1L}\\ \vdots & \vdots & \ddots & \vdots\\ A_{L0} & A_{L1} & \dots & A_{LL}\\ \end{bmatrix}_{N^2 \times N^2}

where each AγμA_{\gamma\mu} is a matrix with the dimension of N×NN\times N.

For the first simplifying assumption, we choose AA to be block diagonal which means that only the block matrices at the diagonal locations AγγA_{\gamma\gamma} are non-zero. Applying this assumption, AA will have the form

A=[A00000A11000ALL]N2×N2A = \begin{bmatrix} A_{00} & 0 & \dots & 0\\ 0 & A_{11} & \dots & 0\\ \vdots & \vdots & \ddots & \vdots\\ 0 & 0 & \dots & A_{LL}\\ \end{bmatrix}_{N^2 \times N^2}

This will allow us to calculate the terms trB[A(ρσ)]\tr_B \left[ A (\rho\otimes\sigma) \right] and trB[(ρσ)A]\tr_B \left[ (\rho\otimes\sigma) A \right] in (80). Before calculating the terms, we note that because of the Hermiticity of AA and the property of A2=IA^2=I, the block diagonal form we imposed on AA will result in the following relations for the sub-matrices AγγA_{\gamma\gamma}.

Aγγ=Aγγ,A_{\gamma\gamma}^\dagger = A_{\gamma\gamma},

which means AγγA_{\gamma\gamma} are Hermitian and

Aγγ2=I.A_{\gamma\gamma}^2 = I.

We can describe the block diagonal form of AA algebraically as

A=γγγAγγ.A = \sum_{\gamma} \ket{\gamma}\bra{\gamma} \otimes A_{\gamma\gamma}.

Using this equation, we can calculate

A(ρσ)=γ,μ(ργμγμ)(Aγγσ).A (\rho\otimes\sigma) = \sum_{\gamma,\mu} \Big( \rho_{\gamma\mu} \ket{\gamma}\bra{\mu} \Big) \otimes \Big( A_{\gamma\gamma}\sigma \Big).

The block matrix representation of this term is

A(ρσ)=[ρ00A00σρ01A00σρ0LA00σρ10A11σρ11A11σρ1LA11σρL0ALLσρL1ALLσρLLALLσ]N2×N2A (\rho\otimes\sigma) = \begin{bmatrix} \rho_{00}\,A_{00}\sigma & \rho_{01}\,A_{00}\sigma & \dots & \rho_{0L}\,A_{00}\sigma\\ \rho_{10}\,A_{11}\sigma & \rho_{11}\,A_{11}\sigma & \dots & \rho_{1L}\,A_{11}\sigma\\ \vdots & \vdots & \ddots & \vdots\\ \rho_{L0}\,A_{LL}\sigma & \rho_{L1}\,A_{LL}\sigma & \dots & \rho_{LL}\,A_{LL}\sigma\\ \end{bmatrix}_{N^2 \times N^2}

Tracing over the second register, we will have

trB[A(ρσ)]=[ρ00A00σρ01A00σρ0LA00σρ10A11σρ11A11σρ1LA11σρL0ALLσρL1ALLσρLLALLσ]N×N\tr_B \left[ A (\rho\otimes\sigma) \right] = \begin{bmatrix} \rho_{00}\,\langle A_{00}\rangle_\sigma & \rho_{01}\,\langle A_{00}\rangle_\sigma & \dots & \rho_{0L}\,\langle A_{00}\rangle_\sigma\\ \rho_{10}\,\langle A_{11}\rangle_\sigma & \rho_{11}\,\langle A_{11}\rangle_\sigma & \dots & \rho_{1L}\,\langle A_{11}\rangle_\sigma\\ \vdots & \vdots & \ddots & \vdots\\ \rho_{L0}\,\langle A_{LL}\rangle_\sigma & \rho_{L1}\,\langle A_{LL}\rangle_\sigma & \dots & \rho_{LL}\,\langle A_{LL}\rangle_\sigma\\ \end{bmatrix}_{N \times N}

where Aγγσ=tr(Aγγσ)\langle A_{\gamma\gamma}\rangle_\sigma = \tr(A_{\gamma\gamma}\sigma) is the expected value of the AγγA_{\gamma\gamma} operator on the state σ.

And we can also write the algebraic equation for the trace operation as

trB[A(ρσ)]=γ,μργμtr(Aγγσ)γμ=γ,μργμAγγσγμ.\begin{align} \tr_B \left[ A (\rho\otimes\sigma) \right] &= \sum_{\gamma,\mu} \rho_{\gamma\mu} \, \tr(A_{\gamma\gamma}\sigma) \,\ket{\gamma}\bra{\mu} \\ &= \sum_{\gamma,\mu} \rho_{\gamma\mu} \, \langle A_{\gamma\gamma}\rangle_\sigma \,\ket{\gamma}\bra{\mu}. \end{align}

To better understand this equation, we can compare it to the initial state

ρ=γ,μργμγμ,\rho = \sum_{\gamma,\mu} \rho_{\gamma\mu} \,\ket{\gamma}\bra{\mu},

and see that the density matrix elements are modified by factors of Aγγσ\langle A_{\gamma\gamma}\rangle_\sigma based on the state of the secondary register σ. We will use this feature to apply the desired phase to the initial state ρ.

In the next step, we have to similarly calculate the term trB[(ρσ)A]\tr_B \left[ (\rho\otimes\sigma) A \right] to completely simplify the matrix MM in (80).

We again write the block matrix AA as

A=μμμAμμ.A = \sum_{\mu} \ket{\mu}\bra{\mu} \otimes A_{\mu\mu}.

This allows us to calculate

(ρσ)A=γ,μ(ργμγμ)(σAμμ).(\rho\otimes\sigma) A = \sum_{\gamma,\mu} \Big( \rho_{\gamma\mu} \ket{\gamma}\bra{\mu} \Big) \otimes \Big( \sigma A_{\mu\mu} \Big).

When we take the trace over the second register, we can write

trB[(ρσ)A]=γ,μργμtr(Aμμσ)γμ=γ,μργμAμμσγμ.\begin{align} \tr_B \left[ (\rho\otimes\sigma) A \right] &= \sum_{\gamma,\mu} \rho_{\gamma\mu} \, \tr(A_{\mu\mu}\sigma) \,\ket{\gamma}\bra{\mu} \\ &= \sum_{\gamma,\mu} \rho_{\gamma\mu} \, \langle A_{\mu\mu}\rangle_\sigma \,\ket{\gamma}\bra{\mu}. \end{align}

If we have a look at the second term trB[(ρσ)A]\tr_B \left[ (\rho\otimes\sigma) A \right] and compare it to the first term from (88), we can see that the expected values Aμμσ\langle A_{\mu\mu}\rangle_\sigma appear at different locations

trB[(ρσ)A]=[ρ00A00σρ01A11σρ0LALLσρ10A00σρ11A11σρ1LALLσρL0A00σρL1A11σρLLALLσ]N×N\tr_B \left[ (\rho\otimes\sigma) A \right] = \begin{bmatrix} \rho_{00}\,\langle A_{00}\rangle_\sigma & \rho_{01}\,\langle A_{11}\rangle_\sigma & \dots & \rho_{0L}\,\langle A_{LL}\rangle_\sigma\\ \rho_{10}\,\langle A_{00}\rangle_\sigma & \rho_{11}\,\langle A_{11}\rangle_\sigma & \dots & \rho_{1L}\,\langle A_{LL}\rangle_\sigma\\ \vdots & \vdots & \ddots & \vdots\\ \rho_{L0}\,\langle A_{00}\rangle_\sigma & \rho_{L1}\,\langle A_{11}\rangle_\sigma & \dots & \rho_{LL}\,\langle A_{LL}\rangle_\sigma\\ \end{bmatrix}_{N \times N}

Having the individual terms calculated, we can simplify the matrix MM in (80) as

M=trB[A(ρσ)(ρσ)A]=γ,μργμ(AγγσAμμσ)γμ,\begin{align} M &= \tr_B \left[ A (\rho\otimes\sigma) - (\rho\otimes\sigma) A \right] \\ \nonumber &= \sum_{\gamma,\mu} \rho_{\gamma\mu} \, \Big( \langle A_{\gamma\gamma}\rangle_\sigma - \langle A_{\mu\mu}\rangle_\sigma \Big) \,\ket{\gamma}\bra{\mu}, \end{align}

which will have the matrix form of

M=[0ρ01(A00σA11σ)ρ0L(A00σALLσ)ρ10(A11σA00σ)0ρ1L(A11σALLσ)ρL0(ALLσA00σ)ρL1(ALLσA11σ)0]N×NM = \begin{bmatrix} 0 & \rho_{01}\,\big(\langle A_{00}\rangle_\sigma - \langle A_{11}\rangle_\sigma \big) & \dots & \rho_{0L}\,\big(\langle A_{00}\rangle_\sigma - \langle A_{LL}\rangle_\sigma \big)\\ \rho_{10}\,\big(\langle A_{11}\rangle_\sigma - \langle A_{00}\rangle_\sigma \big) & 0 & \dots & \rho_{1L}\,\big(\langle A_{11}\rangle_\sigma - \langle A_{LL}\rangle_\sigma \big)\\ \vdots & \vdots & \ddots & \vdots\\ \rho_{L0}\,\big(\langle A_{LL}\rangle_\sigma - \langle A_{00}\rangle_\sigma \big) & \rho_{L1}\,\big(\langle A_{LL}\rangle_\sigma - \langle A_{11}\rangle_\sigma \big) & \dots & 0\\ \end{bmatrix}_{N \times N}
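This closed form for MM can be verified numerically with a generic block-diagonal AA. A NumPy sketch (illustrative N=3N=3 with random pure states; note that only block-diagonality and Hermiticity of the blocks enter this identity, not the Aγγ2=IA_{\gamma\gamma}^2=I condition):

```python
import numpy as np

rng = np.random.default_rng(2)
N = 3

def random_density(N):
    """Return |v><v| for a random normalized complex vector v."""
    v = rng.normal(size=N) + 1j * rng.normal(size=N)
    v /= np.linalg.norm(v)
    return np.outer(v, v.conj())

rho, sigma = random_density(N), random_density(N)

# Arbitrary Hermitian blocks A_{gamma gamma}; only block-diagonality matters here.
blocks = []
for _ in range(N):
    H = rng.normal(size=(N, N)) + 1j * rng.normal(size=(N, N))
    blocks.append(H + H.conj().T)

A = np.zeros((N * N, N * N), dtype=complex)
for g, B in enumerate(blocks):
    A[g * N:(g + 1) * N, g * N:(g + 1) * N] = B

S = np.kron(rho, sigma)
C = A @ S - S @ A                                # the commutator [A, rho (x) sigma]

# Partial trace over the second register: reshape to (gamma, i, mu, j), sum over i == j.
M_traced = C.reshape(N, N, N, N).trace(axis1=1, axis2=3)

# Closed form: M_{gamma mu} = rho_{gamma mu} (<A_gg>_sigma - <A_mm>_sigma).
expvals = np.array([np.trace(B @ sigma) for B in blocks])
M_formula = rho * (expvals[:, None] - expvals[None, :])
assert np.allclose(M_traced, M_formula)
```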

Now that we have calculated the MM matrix, we have finally simplified the term trB[USU]\tr_B\left[USU^\dagger\right] on the left-hand side of (67). We previously showed in (79) that

trB[USU]ρ+iΔ2M.\tr_B\left[USU^\dagger\right] \approx \rho + i \frac{\Delta}{2} M.

We can see that the matrix MM has a form very similar to the previously calculated matrix GG ((73)) on the right-hand side of (67).

Next, we have to use the results of these derivations to construct the operator AA and thus find the partial phase operator UU.

Finding AA

To review the previous derivations, we wanted to simplify the equation (67) as

trB[USU]=ρ\tr_B\left[USU^\dagger\right] = \rho'

by calculating the terms on both sides of the equation using our imposed conditions and approximations.

The results of the calculations from the previous section were, for the right-hand side,

ρ=ρ+iΔG+O(Δ2),\rho' = \rho + i\Delta G + \mathcal{O}(\Delta^2),

and for the left-hand side,

trB[USU]=ρ+iΔ2M+O(Δ2),\tr_B\left[USU^\dagger\right] = \rho + i \frac{\Delta}{2} M + \mathcal{O}(\Delta^2),

where GG and MM were

G=γ,μργμ(σγγσμμ)γμ,G = \sum_{\gamma,\mu} \rho_{\gamma\mu} \, (\sigma_{\gamma\gamma}-\sigma_{\mu\mu}) \, \ket{\gamma}\bra{\mu},

and

M=γ,μργμ(AγγσAμμσ)γμ.M = \sum_{\gamma,\mu} \rho_{\gamma\mu} \, \Big( \langle A_{\gamma\gamma}\rangle_\sigma - \langle A_{\mu\mu}\rangle_\sigma \Big) \,\ket{\gamma}\bra{\mu}.

Our goal in this step is to find the AγγA_{\gamma\gamma} operators such that the equality in (98) holds. When we find these operators, we will be able to construct the operator AA, which in turn means that we have derived the partial phase operator

U=eiΔ2A.U = e^{i\frac{\Delta}{2}A}.

We had assumed Δ is small so that we can neglect Δ2\Delta^2 and higher powers of Δ. This means we only equate both sides of (98) up to the first-order approximation.

In order for (98) to hold, we must have

M=2G,M = 2G,

From the expressions for the operators GG and MM, this matrix equation corresponds to the element-wise condition

12(AγγσAμμσ)=σγγσμμ\frac{1}{2} \Big(\langle A_{\gamma\gamma}\rangle_\sigma - \langle A_{\mu\mu}\rangle_\sigma \Big) = \sigma_{\gamma\gamma} - \sigma_{\mu\mu}

for all indices γ and μ. Now, we have to find AγγA_{\gamma\gamma} such that this equation holds.

In general, every operator AγγA_{\gamma\gamma} can be represented as an N×NN\times N Hermitian matrix. We also assumed they have the property of Aγγ2=IA_{\gamma\gamma}^2 = I. We can label the elements of the matrix AγγA_{\gamma\gamma} in the ii-th row and the jj-th column as aijγa^\gamma_{ij} and write the matrix down as

Aγγ=[a00γa01γa0Lγa10γa11γa1LγaL0γaL1γaLLγ]N×N,A_{\gamma\gamma} = \begin{bmatrix} a^\gamma_{00} & a^\gamma_{01} & \dots & a^\gamma_{0L}\\ a^\gamma_{10} & a^\gamma_{11} & \dots & a^\gamma_{1L}\\ \vdots & \vdots & \ddots & \vdots\\ a^\gamma_{L0} & a^\gamma_{L1} & \dots & a^\gamma_{LL}\\ \end{bmatrix}_{N \times N},

noting that the γ in the terms aijγa^\gamma_{ij} is not an exponent and it indicates that the element corresponds to the matrix AγγA_{\gamma\gamma}.

We can take the first step for finding the elements aijγa^\gamma_{ij} by realizing that in (105), there are no off-diagonal elements of the input state σ and only diagonal elements σγγ\sigma_{\gamma\gamma} appear in the equation.

This means that we can assume all the non-diagonal elements of AγγA_{\gamma\gamma} are zero

aijγ=0,ij,a^\gamma_{ij} = 0, \quad i \ne j,

so that no σγμ\sigma_{\gamma\mu} term (where γμ\gamma\ne\mu) will appear in the expected values Aγγσ=tr(Aγγσ)\langle A_{\gamma\gamma}\rangle_\sigma= \tr(A_{\gamma\gamma}\sigma).

This means that we can simplify AγγA_{\gamma\gamma} as

Aγγ=[a00γ000a11γ000aLLγ]N×N.A_{\gamma\gamma} = \begin{bmatrix} a^\gamma_{00} & 0 & \dots & 0\\ 0 & a^\gamma_{11} & \dots & 0\\ \vdots & \vdots & \ddots & \vdots\\ 0 & 0 & \dots & a^\gamma_{LL}\\ \end{bmatrix}_{N \times N}.

This allows us to write

Aγγσ=tr(Aγγσ)=jajjγσjj.\langle A_{\gamma\gamma}\rangle_\sigma= \tr(A_{\gamma\gamma}\sigma) = \sum_{j} a^\gamma_{jj}\sigma_{jj}.

In the next step of finding the AγγA_{\gamma\gamma} matrices, we should recall that they were Hermitian and had the property of Aγγ2=IA_{\gamma\gamma}^2=I. The Hermiticity forces the diagonal elements ajjγa^\gamma_{jj} to be real. And the Aγγ2=IA_{\gamma\gamma}^2=I property means that we must have

(ajjγ)2=1,\left( a^\gamma_{jj} \right)^2 = 1,

for all indices jj.

The relation in (110) does not allow the diagonal elements ajjγa^\gamma_{jj} to be zero. Therefore, we cannot simply choose them such that Aγγσ=2σγγ\langle A_{\gamma\gamma}\rangle_\sigma = 2 \sigma_{\gamma\gamma}, which would satisfy (105) straightforwardly. Instead, the elements ajjγa^\gamma_{jj} can only be 1 or -1 because they must be real and satisfy (110).

One way to satisfy (105) and fulfill the above conditions is to choose the elements ajjγa^\gamma_{jj} to be -1 at all diagonal locations jj except for j=γj=\gamma, where we choose the element to be 1. In other words, we choose the elements as

ajjγ={1,j=γ1,jγa^\gamma_{jj} = \begin{cases} 1, & j = \gamma \\ -1, & j \ne \gamma \end{cases}

For example, the matrix representations of A00A_{00} and A11A_{11} will be

A00=[100010001]N×NA11=[100010001]N×NA_{00} = \begin{bmatrix} 1 & 0 & \dots & 0\\ 0 & -1 & \dots & 0\\ \vdots & \vdots & \ddots & \vdots\\ 0 & 0 & \dots & -1\\ \end{bmatrix}_{N \times N} \qquad A_{11} = \begin{bmatrix} -1 & 0 & \dots & 0\\ 0 & 1 & \dots & 0\\ \vdots & \vdots & \ddots & \vdots\\ 0 & 0 & \dots & -1\\ \end{bmatrix}_{N \times N}

We can see that with this assumption, we will have

Aγγσ=σ00σ11+σγγσN1,N1=2σγγ1,\langle A_{\gamma\gamma}\rangle_\sigma = - \sigma_{00} - \sigma_{11} - \dots + \sigma_{\gamma\gamma} - \dots - \sigma_{N-1, N-1} = 2 \sigma_{\gamma\gamma} - 1,

where for the last equality we used the fact that σ is a density matrix, and therefore the sum of its diagonal elements is equal to 1:

tr(σ)=μσμμ=1.\tr(\sigma) = \sum_{\mu} \sigma_{\mu\mu} = 1.

We can immediately check that this solution satisfies (105) as

12(AγγσAμμσ)=σγγσμμ\frac{1}{2} \Big(\langle A_{\gamma\gamma}\rangle_\sigma - \langle A_{\mu\mu}\rangle_\sigma \Big) = \sigma_{\gamma\gamma} - \sigma_{\mu\mu} \nonumber
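This check is easy to reproduce numerically. A short NumPy sketch (illustrative N=4N=4 and a random pure state for σ) confirms Aγγσ=2σγγ1\langle A_{\gamma\gamma}\rangle_\sigma = 2\sigma_{\gamma\gamma}-1 for every γ:

```python
import numpy as np

rng = np.random.default_rng(3)
N = 4
w = rng.normal(size=N) + 1j * rng.normal(size=N)
w /= np.linalg.norm(w)
sigma = np.outer(w, w.conj())                  # a random pure-state density matrix

for gamma in range(N):
    a = -np.ones(N)
    a[gamma] = 1.0
    A_gg = np.diag(a)                          # Hermitian and A_gg @ A_gg = I
    expval = np.trace(A_gg @ sigma).real
    # <A_gg>_sigma = 2 sigma_{gamma gamma} - 1, using tr(sigma) = 1
    assert np.isclose(expval, 2 * sigma[gamma, gamma].real - 1)
```

Since the constant -1 cancels in the difference AγγσAμμσ\langle A_{\gamma\gamma}\rangle_\sigma - \langle A_{\mu\mu}\rangle_\sigma, the condition (105) follows immediately.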

With this, the derivation of the AA operator is complete and we will move on to calculating the partial phase operator UU; however, let us first briefly review a summary of the derivations so far to prepare for the following sections.

Summary of the results of deriving AA

We wanted to find the UU operator such that

trB[USU]=ρ.\tr_B\left[USU^\dagger\right] = \rho'.

We proposed the UU operator to have the form of

U=eiΔ2A.U = e^{i \frac{\Delta}{2} A}.

Furthermore, we assumed Δ is small and solved (116) for AA using the above UU operator.

This resulted in constructing AA as a diagonal matrix with the block-diagonal form of

A=[A00000A11000AN1,N1]N2×N2,A = \begin{bmatrix} A_{00} & 0 & \dots & 0\\ 0 & A_{11} & \dots & 0\\ \vdots & \vdots & \ddots & \vdots\\ 0 & 0 & \dots & A_{N-1,N-1}\\ \end{bmatrix}_{N^2 \times N^2},

where AγγA_{\gamma\gamma} are N×NN \times N diagonal matrices with -1 on all diagonal locations except for the γ-th location where the diagonal element is 1.

A00=[100010001]N×NA11=[100010001]N×NA_{00} = \begin{bmatrix} 1 & 0 & \dots & 0\\ 0 & -1 & \dots & 0\\ \vdots & \vdots & \ddots & \vdots\\ 0 & 0 & \dots & -1\\ \end{bmatrix}_{N \times N} \qquad A_{11} = \begin{bmatrix} -1 & 0 & \dots & 0\\ 0 & 1 & \dots & 0\\ \vdots & \vdots & \ddots & \vdots\\ 0 & 0 & \dots & -1\\ \end{bmatrix}_{N \times N}

Because we constructed AA with the condition to satisfy (116), the UU operator corresponding to AA is the partial phase operator that we were searching for, meaning that it transforms

ρ=γ,μργμγμ\rho = \sum_{\gamma,\mu} \, \rho_{\gamma\mu} \, \ket{\gamma}\bra{\mu}

to

ρ=γ,μργμeiΔ(σγγσμμ)γμ,\rho' = \sum_{\gamma,\mu} \, \rho_{\gamma\mu} \,e^{i\Delta(\sigma_{\gamma\gamma}-\sigma_{\mu\mu})} \, \ket{\gamma}\bra{\mu},

for small Δ.
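The summary above can be verified end to end. The sketch below (NumPy; illustrative N=4N=4, random pure states, and Δ = 10⁻³ as assumptions) builds the diagonal AA from the derived blocks, exponentiates it elementwise to obtain UU, and confirms that the traced output matches the target state up to terms of order Δ²:

```python
import numpy as np

rng = np.random.default_rng(4)
N = 4

def random_density(N):
    """Return |v><v| for a random normalized complex vector v."""
    v = rng.normal(size=N) + 1j * rng.normal(size=N)
    v /= np.linalg.norm(v)
    return np.outer(v, v.conj())

rho, sigma = random_density(N), random_density(N)

# A is diagonal: its (gamma*N + mu)-th entry is +1 if gamma == mu, else -1.
diag_A = np.array([1.0 if g == m else -1.0 for g in range(N) for m in range(N)])
delta = 1e-3
U = np.diag(np.exp(1j * delta / 2 * diag_A))   # exponential of a diagonal matrix is elementwise

S = np.kron(rho, sigma)
F = U @ S @ U.conj().T
# Trace out register B: reshape to (gamma, i, mu, j) and sum over i == j.
F_A = F.reshape(N, N, N, N).trace(axis1=1, axis2=3)

d = np.real(np.diag(sigma))
rho_target = rho * np.exp(1j * delta * (d[:, None] - d[None, :]))

# The residual should scale like Delta^2, i.e. sit far below Delta itself.
assert np.max(np.abs(F_A - rho_target)) < delta**2 * 10
```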

However, it is still useful to find the explicit form of the UU operator from these results, and in the next sections we have to show that it can be implemented efficiently.

Deriving UU

Now that we have derived the AA operator, we should derive an explicit form for UU. In this section, we put the AA operator derived in the previous sections in the definition of UU to find it explicitly.

We recall that UU was given as

U=eiΔ2A.U = e^{i \frac{\Delta}{2} A}.

As we will see, UU corresponds to a simple operator. In order to find UU, we first recall that AA had the property of A2=IA^2=I. Using this property, we can expand UU as

U=eiΔ2A=cos(Δ2)I+isin(Δ2)A.U = e^{i \frac{\Delta}{2} A} = \cos(\frac{\Delta}{2}) I + i \sin(\frac{\Delta}{2}) A.

We can see from (123) that because II and AA both have diagonal forms in the computational basis, the UU operator will also have a diagonal form as

U=[U00000U11000UN1,N1]N2×N2,U = \begin{bmatrix} U_{00} & 0 & \dots & 0\\ 0 & U_{11} & \dots & 0\\ \vdots & \vdots & \ddots & \vdots\\ 0 & 0 & \dots & U_{N-1,N-1}\\ \end{bmatrix}_{N^2 \times N^2},

where UγγU_{\gamma\gamma} are (N×N)(N\times N)-dimensional diagonal matrices.

The diagonal elements of the matrix AA are either 1 or -1 and the diagonal elements of II are all equal to 1. Thus, using (123), we realize that the diagonal elements of the sub-matrices UγγU_{\gamma\gamma} will be either

eiΔ2=cos(Δ2)+isin(Δ2)e^{i\frac{\Delta}{2}} = \cos(\frac{\Delta}{2}) + i \sin(\frac{\Delta}{2})

or

eiΔ2=cos(Δ2)isin(Δ2),e^{-i\frac{\Delta}{2}} = \cos(\frac{\Delta}{2}) - i \sin(\frac{\Delta}{2}),

based on the diagonal elements of the AγγA_{\gamma\gamma} matrices.

Using this correspondence, we can see that the form of the matrix UU is identical to AA with the difference that every diagonal element 1 in the operator AA transforms to eiΔ2e^{i\frac{\Delta}{2}} in UU, and every -1 transforms to eiΔ2e^{-i\frac{\Delta}{2}}.

We can for example write the matrix representations of U00U_{00} and U11U_{11} as

U00=[eiΔ2000eiΔ2000eiΔ2]N×NU11=[eiΔ2000eiΔ2000eiΔ2]N×NU_{00} = \begin{bmatrix} e^{i\frac{\Delta}{2}} & 0 & \dots & 0\\ 0 & e^{-i\frac{\Delta}{2}} & \dots & 0\\ \vdots & \vdots & \ddots & \vdots\\ 0 & 0 & \dots & e^{-i\frac{\Delta}{2}}\\ \end{bmatrix}_{N \times N} \qquad U_{11} = \begin{bmatrix} e^{-i\frac{\Delta}{2}} & 0 & \dots & 0\\ 0 & e^{i\frac{\Delta}{2}} & \dots & 0\\ \vdots & \vdots & \ddots & \vdots\\ 0 & 0 & \dots & e^{-i\frac{\Delta}{2}}\\ \end{bmatrix}_{N \times N}

With this, we have derived the explicit form of the operator UU.

This section marks the end of deriving the partial phase operator UU. In the upcoming section, we will find an implementation of this operator for quantum computers. Later, we will show how we can use this operator iteratively to overcome the limitation of the small phase coefficient Δ as one of our assumptions in the current derivations.

5.2The implementation of the partial phase operator UU

Previously, we derived a simple form for the operator UU, but we still have not shown how it can actually be implemented using standard quantum gates. We will now find an implementation by analyzing the action of UU on the basis states and trying to find quantum gates that perform the corresponding action. To do this analysis, we must revisit our definitions of the computational basis states of our quantum memory and their correspondence to the pattern of the eigenvalues of UU.

We have defined our quantum memory as a combination of a primary register in the first position and a secondary register in the second. Each register has nn qubits and we can define the basis states of the memory as a whole as

γμ,\ket{\gamma}\ket{\mu},

where γ and μ are respectively the basis indices of the primary and secondary registers. Each register has N=2nN=2^n basis states; therefore, the whole memory has N2N^2 basis states.

We showed that UU has a diagonal form in the computational basis ((124) and (127)). This means that all the computational basis states are eigenstates of UU

Uγμ=λγμγμ.U\ket{\gamma}\ket{\mu} = \lambda_{\gamma\mu} \ket{\gamma}\ket{\mu}.

We also derived that the diagonal elements of UU are either eiΔ2e^{i\frac{\Delta}{2}} or eiΔ2e^{-i\frac{\Delta}{2}}. It means that the eigenvalues λγμ\lambda_{\gamma\mu} in (129) are correspondingly either eiΔ2e^{i\frac{\Delta}{2}} or eiΔ2e^{-i\frac{\Delta}{2}}. Our goal is to derive an explicit equation for the eigenvalues λγμ\lambda_{\gamma\mu}.

We can start by writing the algebraic form of UU from (124) as

U=γγγUγγ.U = \sum_\gamma \ket{\gamma}\bra{\gamma} \otimes U_{\gamma\gamma}.

Now applying the operator on the basis states, we will have

Uγμ=γUγγμ.U\ket{\gamma}\ket{\mu} = \ket{\gamma} \otimes U_{\gamma\gamma} \ket{\mu}.

Now we recall that UγγU_{\gamma\gamma} was a diagonal N×NN\times N matrix with eiΔ2e^{-i\frac{\Delta}{2}} as all of its diagonal elements except for the γ-th diagonal element where the value was eiΔ2e^{i\frac{\Delta}{2}} ((127)). And considering the representation of the computational basis states μ\ket{\mu} as

0=[100],1=[010],,N1=[001],\ket{0} = \begin{bmatrix} 1\\ 0\\ \vdots\\ 0 \end{bmatrix} ,\qquad \ket{1} = \begin{bmatrix} 0\\ 1\\ \vdots\\ 0 \end{bmatrix} ,\quad\dots\quad,\quad \ket{N-1} = \begin{bmatrix} 0\\ 0\\ \vdots\\ 1 \end{bmatrix},

the term UγγμU_{\gamma\gamma} \ket{\mu} is evaluated as

Uγγμ={eiΔ2μ,for γ=μeiΔ2μ,for γμ.U_{\gamma\gamma} \ket{\mu} = \begin{cases} e^{i\frac{\Delta}{2}} \ket{\mu}, & \text{for } \gamma=\mu \\ e^{-i\frac{\Delta}{2}} \ket{\mu}, & \text{for } \gamma \ne \mu \end{cases}.

This means that we can write

Uγμ={eiΔ2γμ,for γ=μeiΔ2γμ,for γμ=eiΔ2{eiΔγμ,for γ=μγμ,for γμ\begin{align} U\ket{\gamma}\ket{\mu} &= \begin{cases} e^{i\frac{\Delta}{2}} \ket{\gamma}\ket{\mu}, & \text{for } \gamma=\mu \\ e^{-i\frac{\Delta}{2}} \ket{\gamma}\ket{\mu}, & \text{for } \gamma \ne \mu \end{cases} \\ &= e^{-i\frac{\Delta}{2}} \begin{cases} e^{i\Delta} \ket{\gamma}\ket{\mu}, & \text{for } \gamma=\mu \\ \ket{\gamma}\ket{\mu}, & \text{for } \gamma \ne \mu \end{cases} \end{align}

In other words, given the basis states of the two registers, γ\ket{\gamma} and μ\ket{\mu}, the UU operator (ignoring the overall phase) applies a Δ phase if the two registers are in the same basis state and does nothing otherwise.
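This factored eigenvalue pattern can be checked directly: up to the global phase eiΔ2e^{-i\frac{\Delta}{2}}, the UU operator applies eiΔe^{i\Delta} exactly on the basis states with γ = μ. A minimal sketch (illustrative N=4N=4 and Δ = 0.3):

```python
import numpy as np

N = 4
delta = 0.3

# Diagonal of A: +1 where gamma == mu, -1 otherwise.
diag_A = np.array([1.0 if g == m else -1.0 for g in range(N) for m in range(N)])
eigvals = np.exp(1j * delta / 2 * diag_A)      # the diagonal (eigenvalues) of U

# Factor out the global phase exp(-i*delta/2): what remains is exp(i*delta)
# on the gamma == mu states and 1 everywhere else.
factored = np.exp(-1j * delta / 2) * np.where(diag_A > 0, np.exp(1j * delta), 1.0)
assert np.allclose(eigvals, factored)
```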

Now that we have understood the action of the UU operator on the basis states, we shall search for an efficient implementation of it using standard quantum gates.

As previously derived, the UU operator applies a phase with the condition that both of the registers are in the same basis state. We write this condition as

γ=μ.\gamma = \mu.

However, we must note that each basis state γ\ket{\gamma} actually consists of nn qubits as

γ=γn1γ1γ0,\ket{\gamma} = \ket{\gamma_{n-1}}\dots\ket{\gamma_1}\ket{\gamma_0},

where each γi\ket{\gamma_i} can be in one of its basis states 0\ket{0} or 1\ket{1}.

This means that the equality of the basis indices γ and μ is equivalent to the equality of all the corresponding individual qubits.

γ=μγi=μi for all i\gamma = \mu \quad\Longleftrightarrow\quad \gamma_i = \mu_i \quad \text{ for all } i

Having decomposed the basis states of the registers to their qubits, let us derive an efficient implementation of the operator UU acting on individual qubits.

We can start by defining the equality flag bits for each corresponding qubits as

zi={1,γi=μi0,γiμi.z_i = \begin{cases} 1, & \gamma_i = \mu_i \\ 0, & \gamma_i \ne \mu_i \\ \end{cases}.

The ziz_i flags show whether the ii-th qubits of the two registers (primary and secondary) are in the same state or not.

We first need to compute these flags on the quantum computer and then apply the phase Δ to the registers if all of the flag bits are equal to 1. It is important to note that these operations must be performed using unitary quantum gates without any measurement, because a measurement would collapse the wavefunction and the initial state would be lost.

In our implementation, we temporarily use one of the registers to store the calculated flag bits and recover the original state after the operation. In this way, we will not need to introduce an ancilla register to store the flags.

We first recognize that the ziz_i flags can be written as a logical function as

zi=γiμi,z_i = \gamma_i \oplus \overline{\mu_i},

where \oplus is the XOR operator and the bar over μi\mu_i is the logical NOT operator. As we will explain in what follows, this logical function allows the flag bits to be computed on a quantum computer using CNOT gates.
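The identity can be confirmed with a quick classical truth table (a plain Python sketch of ours):

```python
# Truth-table check of z_i = gamma_i XOR (NOT mu_i):
# the flag is 1 exactly when the two bits agree.
for g in (0, 1):
    for m in (0, 1):
        z = g ^ (m ^ 1)           # m ^ 1 is the logical NOT of the bit m
        assert z == (1 if g == m else 0)
```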

The standard CNOT gate is defined as acting on two qubits called control and target qubits, and it flips the target qubit only if the control qubit is in the 1\ket{1} state. This action can be written as

CNOTxy=xyx,\text{CNOT}\ket{x}\ket{y} = \ket{x}\ket{y \oplus x},

where x\ket{x} is the control and y\ket{y} is the target qubit, and xx and yy can be zero or one x,y{0,1}x,y\in\{0,1\}. Note that y1=yy\oplus 1 = \overline{y}.

In this case, the CNOT gate performs a NOT operation if the control qubit is 1\ket{1}. We can denote this type of CNOT operator as CNOT1\text{CNOT}_1 in order to contrast it with the other type we will introduce shortly. This operator is shown in the circuit representation in Figure 2, where the dot on the x\ket{x} state indicates that it is the control qubit.

The CNOT operator with 1 as its control state (\text{CNOT}_1)

Figure 2:The CNOT operator with 1 as its control state (CNOT1\text{CNOT}_1)

There is also a second type of CNOT operator that flips the target qubit if the control qubit is in the state 0\ket{0}. We can denote this operator as CNOT0\text{CNOT}_0 with the following definition

CNOT0xy=xyx.\text{CNOT}_0\ket{x}\ket{y} = \ket{x}\ket{y \oplus \overline{x}}.

This operator is drawn in Figure 3, where the non-filled dot indicates that the control state is 0.

The CNOT operator with 0 as its control state (\text{CNOT}_0)

Figure 3:The CNOT operator with 0 as its control state (CNOT0\text{CNOT}_0)

It is straightforward to show that the two types of CNOT operators are related to each other by applying Pauli XX operators to the control qubit before and after the gate, as in Figure 4.

The equivalence of the CNOT operators with different control states

Figure 4:The equivalence of the CNOT operators with different control states
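This equivalence can be verified directly with matrices. The sketch below (plain NumPy; the variable names are ours) builds CNOT0\text{CNOT}_0 by sandwiching the control of a standard CNOT with Pauli XX gates and checks its defining action:

```python
import numpy as np

I2 = np.eye(2)
X = np.array([[0, 1], [1, 0]])

# CNOT_1: flips the target (second qubit) when the control (first qubit) is |1>
CNOT1 = np.array([[1, 0, 0, 0],
                  [0, 1, 0, 0],
                  [0, 0, 0, 1],
                  [0, 0, 1, 0]])

# Equivalence of Figure 4: sandwich the control with Pauli-X gates
CNOT0 = np.kron(X, I2) @ CNOT1 @ np.kron(X, I2)

# Verify the defining action CNOT_0 |x>|y> = |x>|y XOR NOT x>
for x in (0, 1):
    for y in (0, 1):
        out = CNOT0 @ np.eye(4)[2 * x + y]
        assert np.allclose(out, np.eye(4)[2 * x + (y ^ (x ^ 1))])
```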

After introducing the CNOT0\text{CNOT}_0 operator, we can see that it exactly performs the transformation we need to calculate the ziz_i flag bits from (139). Each flag bit can be calculated in the qubits of the primary memory as

CNOT0μiγi=μiγiμi=μizi.\text{CNOT}_0 \ket{\mu_i}\ket{\gamma_i} = \ket{\mu_i}\ket{\gamma_i \oplus \overline{\mu_i}} = \ket{\mu_i}\ket{z_i}.

The calculation of the flags is demonstrated in Figure 5 for the case of 3-qubit registers, where ψ\ket{\psi} and ϕ\ket{\phi} are respectively the primary and secondary registers.

The preliminary flag bit calculation for the partial phase operator, for the case of n=3

Figure 5:The preliminary flag bit calculation for the partial phase operator, for the case of n=3n=3

Now, we only have to apply a phase term eiΔe^{i\Delta} to the state if all the zi\ket{z_i} qubits are in the 1\ket{1} state. This is exactly the action of the controlled phase operators with multiple control qubits; they apply a phase if all the input qubits are in the state 1\ket{1}. We just need to apply this multi-qubit phase operator with the phase value of Δ on all of the calculated flag qubits. This is drawn in Figure 6 for the case of the 3-qubit register example, where P(Δ)P(\Delta) is a phase operator with the phase value of Δ, defined as

P(Δ)=[100eiΔ].P(\Delta) = \begin{bmatrix} 1 & 0\\ 0 & e^{i\Delta}\\ \end{bmatrix}.
The circuit that applies the phase \Delta if the input registers are in the same basis state, for the case of n=3

Figure 6:The circuit that applies the phase Δ if the input registers are in the same basis state, for the case of n=3n=3

After this step, the desired phase Δ is applied to the memory if the input registers ψ\ket{\psi} and ϕ\ket{\phi} were in the same basis state. However, the circuit in Figure 6 is not the complete partial phase operator because the primary register was transformed by the flag bit calculator circuit. In order to recover the primary state, we only have to re-apply the same CNOT operators to un-compute the previous calculation. The complete implementation of the partial phase operator UU is shown in Figure 7.

The implementation of the partial phase operator U in the case of 3-qubit input registers

Figure 7:The implementation of the partial phase operator UU in the case of 3-qubit input registers
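As a sanity check, the circuit of Figure 7 can be traced classically on basis states, since the CNOT layers merely permute the computational basis and the multi-controlled phase is diagonal. A hedged Python sketch (the function name is our own, not from any library):

```python
import numpy as np

def partial_phase_on_basis(gamma, mu, n, delta):
    """Trace |gamma>|mu> through the circuit of Figure 7 (ignoring the
    global phase): flag computation, multi-controlled phase, un-computation."""
    # First CNOT_0 layer: primary qubit i becomes z_i = gamma_i XOR NOT mu_i
    flags = [((gamma >> i) & 1) ^ (((mu >> i) & 1) ^ 1) for i in range(n)]
    # Multi-controlled phase: apply exp(i*delta) iff all flags are 1
    phase = np.exp(1j * delta) if all(flags) else 1.0
    # Second CNOT_0 layer un-computes the flags, recovering gamma exactly
    recovered = 0
    for i in range(n):
        mu_i = (mu >> i) & 1
        recovered |= (flags[i] ^ (mu_i ^ 1)) << i
    return recovered, mu, phase

# The circuit applies the phase only when gamma == mu, and restores gamma
n, delta = 3, 0.1
for gamma in range(2 ** n):
    for mu in range(2 ** n):
        g_out, m_out, phase = partial_phase_on_basis(gamma, mu, n, delta)
        assert g_out == gamma and m_out == mu
        assert np.isclose(phase, np.exp(1j * delta) if gamma == mu else 1.0)
```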

The final recovery of the primary state is possible because the phase gate does not modify the index of the state of the register and only applies a phase term to it. We also used the fact that for any logical bit xx, we have

xx=0x \oplus x = 0

and

x0=x.x \oplus 0 = x.

After the application of the phase operator, the state of each of the qubits of the primary register is γiμi\ket{\gamma_i\oplus\overline{\mu_i}}; therefore, using the definition of the CNOT0\text{CNOT}_0 operator, we have

CNOT0μiγiμi=μiγiμiμi=μiγi0=μiγi.\text{CNOT}_0 \ket{\mu_i}\ket{\gamma_i\oplus\overline{\mu_i}} = \ket{\mu_i}\ket{\gamma_i\oplus\overline{\mu_i}\oplus\overline{\mu_i}} = \ket{\mu_i}\ket{\gamma_i\oplus 0} = \ket{\mu_i}\ket{\gamma_i}.

This means that each CNOT operator will recover the original state of the corresponding qubit and by applying the operation on all the qubits, we will completely retrieve the initial state of the primary register γ\ket{\gamma}.

This section marks the end of deriving the partial phase operator. In the previous sections, we mathematically constructed the UU operator which applies the desired phase transformation on the input state, and in this section, we presented an implementation of that operator.

We can see that the gate complexity of the partial phase operator is

O(n)=O(logN),\mathcal{O}(n) = \mathcal{O}(\log N),

because it needs 2n2n CNOT gates and the phase operator can also be implemented using O(n)\mathcal{O}(n) gates Nielsen & Chuang (2010).

We also note that we did not use any ancilla qubit in this implementation of the UU operator, because we efficiently used the existing qubits to temporarily calculate and store the intermediary variables and we were able to recover the original qubits after the operations.

One very important feature of the UU operator is that it is agnostic about the input states; therefore, we do not need to change it to apply different phase profiles to the primary register. In this algorithm, not only is the state that receives the phase encoded in a quantum register as the input, but the phase itself is also stored in a secondary quantum register as part of the algorithm.

Therefore, the algorithm reduces the problem of applying different phase profiles

f(γ)=Δϕ(γ)2f(\gamma) = \Delta \abs{\phi(\gamma)}^2

to a quantum register to the problem of being able to initialize a secondary register ϕ\ket{\phi} to the state that corresponds to a desired profile as

ϕ=γϕ(γ)γ.\ket{\phi} = \sum_\gamma \phi(\gamma) \ket{\gamma}.

After completing the derivation of the partial phase operator, we have to overcome the limitation introduced by assuming Δ20\Delta^2\approx 0. Throughout the derivations, we assumed the phase ff we want to apply is of the form

f(γ)=Δϕ(γ)2,f(\gamma) = \Delta \, \abs{\phi(\gamma)}^2,

where Δ is a small real number. But in general, we would like to apply any phase

f(γ)=αϕ(γ)2,f(\gamma) = \alpha \, \abs{\phi(\gamma)}^2,

for any real number α.

The goal in the next section is to use the partial phase operator to derive an algorithm that applies the phase profile with arbitrary phase coefficient α.

5.3The iterative phase algorithm

In the previous section, we derived an operator that (for small Δ) can transform

ρ=γ,μργμγμ\rho = \sum_{\gamma,\mu} \rho_{\gamma\mu} \ket{\gamma}\bra{\mu}

to

ρ=γ,μργμeiΔ(σγγσμμ)γμ.\rho' = \sum_{\gamma,\mu} \rho_{\gamma\mu} \, e^{i\Delta (\sigma_{\gamma\gamma} - \sigma_{\mu\mu})} \ket{\gamma}\bra{\mu}.

However, our goal in this work was not to only apply small phase coefficients Δ but we wanted to apply arbitrary phase coefficients α as

ρ=γ,μργμeiα(σγγσμμ)γμ.\rho' = \sum_{\gamma,\mu} \rho_{\gamma\mu} \, e^{i\alpha (\sigma_{\gamma\gamma} - \sigma_{\mu\mu})} \ket{\gamma}\bra{\mu}.

In order to achieve that, we propose an algorithm that iteratively applies the previous partial phase operator mm times to the primary memory to apply the phase with a desired α=mΔ\alpha = m\Delta phase coefficient. This will allow us to apply arbitrary phase coefficients α even though the phase step Δ should be small for the individual partial phase operators to work.

In this approach, we need multiple copies of the secondary register σ. Let us assume we have mm copies of σ. Note that having multiple copies of σ is possible and does not contradict the no-cloning theorem, because the state σ is fully known: we choose it to correspond to a specific, predetermined phase profile. Therefore, we can initialize quantum registers to the state σ with our initializers as many times as necessary.

In this section, we will show that if we apply the partial phase operator mm times, each with a phase step Δ and consuming one copy of σ, the overall effective phase coefficient α=mΔ\alpha = m\Delta will be applied to the primary register. Furthermore, we calculate the error of this algorithm as a function of the parameters mm and Δ. As we will see, the error δ (defined as the trace distance between the output and the desired states) is

δ=O(mΔ2)=O(α2m).\delta = \mathcal{O}(m\Delta^2) = \mathcal{O}(\frac{\alpha^2}{m}).

This error analysis allows us to find out how many copies of σ we need to apply the desired phase coefficient α while achieving a desired final accuracy δ.

We will take the following steps for the derivations in this section

  1. We properly model the iterative algorithm, calculate the desired state and define the error in terms of the input parameters.

  2. Then, we go through the recursive calculations at each iteration of the algorithm and derive the output state of the algorithm as a function of ρ, σ, mm, and Δ.

  3. We end this section by analyzing the final error based on the calculations from the previous steps and discussing the implications of changing the input parameters on the final error.

Modeling and error proposal

Assume we have mm copies of the state σ. We start with our primary register ρ and the secondary registers σ which means we can describe the state of the initial memory as

ρσσσm=ρσm.\rho\otimes\underbrace{\sigma\otimes\sigma\otimes\dots\otimes\sigma}_{m} = \rho\otimes\sigma^{\otimes m}.

The algorithm applies the partial phase operator UU derived in the previous section using the primary register and one of the copies of σ as input states at each iteration step.

We model the iterations as follows. We regard the initial state of the primary register ρ as the 0-th step and denote it as ρ[0]\rho^{[0]}. This means before running the algorithm, we have

ρ[0]=ρ.\rho^{[0]} = \rho.

We use the index kk to indicate the value of a parameter after the kk-th iteration.

In the first iteration, we use one of the secondary states σ and apply the partial phase operator UU with the phase step of Δ as

F[1]=U(ρ[0]σ)U.F^{[1]} = U(\rho^{[0]}\otimes\sigma)U^\dagger.

After this step, similar to the original partial phase operator, we discard the secondary register to work with the new state of the primary register as

ρ[1]=trBF[1]=trB[U(ρ[0]σ)U].\rho^{[1]} = \tr_B F^{[1]} = \tr_B{\left[U(\rho^{[0]}\otimes\sigma)U^\dagger\right]}.

The state ρ[1]\rho^{[1]} is the state of the primary register after the first iteration. At the next iteration, we use the new state of the primary register ρ[1]\rho^{[1]} and another copy of σ. We repeat the partial phase operation using these states to calculate the state of the primary register after the second iteration as

ρ[2]=trB[U(ρ[1]σ)U].\rho^{[2]} = \tr_B{\left[U(\rho^{[1]}\otimes\sigma)U^\dagger\right]}.

Similarly, we repeat this operation mm times to calculate ρ[m]\rho^{[m]}. The state ρ[m]\rho^{[m]} is the final state at the output of the iterative phase algorithm. We note that at each iteration, we have the recursive formula of

ρ[k]=trB[U(ρ[k1]σ)U],\rho^{[k]} = \tr_B{\left[U(\rho^{[k-1]}\otimes\sigma)U^\dagger\right]},

where ρ[k]\rho^{[k]} is the state of the primary register after kk iterations. This is the formula we will evaluate in terms of the initial state ρ[0]\rho^{[0]} and the other parameters that will enable us to analyze the final error.
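This recursion can be explored numerically. In the sketch below (our own NumPy code, not a quantum implementation), we model U by its exact action on basis states, i.e. a diagonal matrix applying e^{iΔ} when γ = μ, which equals the derived U up to a global phase, and evaluate one iteration of the recursion:

```python
import numpy as np

def partial_phase_U(N, delta):
    # Diagonal operator: applies exp(i*delta) to |gamma>|mu> iff gamma == mu
    d = np.ones(N * N, dtype=complex)
    for g in range(N):
        d[g * N + g] = np.exp(1j * delta)
    return np.diag(d)

def trace_out_B(M, N):
    # Partial trace over the secondary register (second tensor factor)
    return np.trace(M.reshape(N, N, N, N), axis1=1, axis2=3)

rng = np.random.default_rng(0)
N, delta = 4, 0.01
psi = rng.normal(size=N) + 1j * rng.normal(size=N)
phi = rng.normal(size=N) + 1j * rng.normal(size=N)
psi, phi = psi / np.linalg.norm(psi), phi / np.linalg.norm(phi)
rho, sigma = np.outer(psi, psi.conj()), np.outer(phi, phi.conj())

U = partial_phase_U(N, delta)
rho1 = trace_out_B(U @ np.kron(rho, sigma) @ U.conj().T, N)

# First-order prediction: rho1_{gm} ~ rho_{gm} exp(i*delta*(sigma_gg - sigma_mm))
s = np.real(np.diag(sigma))
predicted = rho * np.exp(1j * delta * (s[:, None] - s[None, :]))
assert np.allclose(rho1, predicted, atol=1e-3)
```

The last assertion checks the first-order action for a small Δ; the residual is of order Δ², consistent with the error analysis below.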

We define the error as the trace distance between the final state ρ[m]\rho^{[m]} and the desired state ρ\rho'. The trace distance DD was defined in the introduction sections in (49). Thus, the error is given as

δ=12trρ[m]ρ.\delta = \frac{1}{2} \tr\abs{\rho^{[m]} - \rho'}.

We are in particular interested in the error as a function of the parameter mm to see if we can decrease the error by increasing the number of iterations for a fixed phase coefficient α.

The desired state is also defined as

ρ=γ,μργμeiα(σγγσμμ)γμ,\rho' = \sum_{\gamma,\mu} \rho_{\gamma\mu} \, e^{i\alpha (\sigma_{\gamma\gamma} - \sigma_{\mu\mu})} \ket{\gamma}\bra{\mu},

given the initial state was

ρ=γ,μργμγμ.\rho = \sum_{\gamma,\mu} \rho_{\gamma\mu} \ket{\gamma}\bra{\mu}.

Recursive calculations

Let us start evaluating the recursive formula of the iterations

ρ[k]=trB[U(ρ[k1]σ)U].\rho^{[k]} = \tr_B{\left[U(\rho^{[k-1]}\otimes\sigma)U^\dagger\right]}.

As the first step, we substitute the definition of the partial phase operator

U=eiΔ2A=cos(Δ2)I+isin(Δ2)A,U = e^{i \frac{\Delta}{2} A} = \cos(\frac{\Delta}{2}) I + i \sin(\frac{\Delta}{2}) A,

into (165) to write

ρ[k]=trB[cos2(Δ2)ρ[k1]σ+icos(Δ2)sin(Δ2)[A,ρ[k1]σ]+sin2(Δ2)A(ρ[k1]σ)A].\begin{align} \rho^{[k]} = \tr_B \Bigg[\Bigg. \cos^2\left(\frac{\Delta}{2}\right) \rho^{[k-1]}\otimes\sigma &+ i\cos\left(\frac{\Delta}{2}\right)\sin\left(\frac{\Delta}{2}\right) \left[ A,\, \rho^{[k-1]}\otimes\sigma \right] \\ \nonumber &+ \sin^2\left(\frac{\Delta}{2}\right) A(\rho^{[k-1]}\otimes\sigma)A \Bigg.\Bigg]. \end{align}
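This expansion of U uses the fact that A is an involution (A² = I), for which e^{iθA} = cos(θ)I + i sin(θ)A. A quick numerical check with a diagonal involution (an illustrative matrix of ours, with eigenvalues +1 and −1 as in the derivation):

```python
import numpy as np

theta = 0.3
# A diagonal involution: eigenvalue +1 for gamma == mu, -1 for gamma != mu
A = np.diag([1.0, -1.0, -1.0, 1.0])

# exp(i*theta*A) of a diagonal matrix is the element-wise exponential
lhs = np.diag(np.exp(1j * theta * np.diag(A)))
rhs = np.cos(theta) * np.eye(4) + 1j * np.sin(theta) * A
assert np.allclose(lhs, rhs)
```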

Next, we take the partial trace, which results in

ρ[k]=cos2(Δ2)ρ[k1]+icos(Δ2)sin(Δ2)trB[A,ρ[k1]σ]+sin2(Δ2)trB[A(ρ[k1]σ)A].\begin{align} \rho^{[k]} = \cos^2\left(\frac{\Delta}{2}\right) \rho^{[k-1]} &+ i\cos\left(\frac{\Delta}{2}\right)\sin\left(\frac{\Delta}{2}\right) \tr_B \left[ A,\, \rho^{[k-1]}\otimes\sigma \right] \\ \nonumber &+ \sin^2\left(\frac{\Delta}{2}\right) \tr_B \left[ A(\rho^{[k-1]}\otimes\sigma)A \right]. \end{align}

In order to simplify this equation, let us define

J=trB[A,ρ[k1]σ]J = \tr_B \left[ A,\, \rho^{[k-1]}\otimes\sigma \right]

and

K=trB[A(ρ[k1]σ)A],K = \tr_B \left[ A(\rho^{[k-1]}\otimes\sigma)A \right],

and rewrite the equation as

ρ[k]=cos2(Δ2)ρ[k1]+icos(Δ2)sin(Δ2)J+sin2(Δ2)K.\begin{align} \rho^{[k]} = \cos^2\left(\frac{\Delta}{2}\right) \rho^{[k-1]} + i\cos\left(\frac{\Delta}{2}\right)\sin\left(\frac{\Delta}{2}\right) J + \sin^2\left(\frac{\Delta}{2}\right) K. \end{align}

Next, we will find the matrices JJ and KK.

Calculating JJ

We recognize that JJ has exactly the form of the matrix MM from (80) that we calculated in the derivations of UU. Using the calculations from the previous derivations, we can write

J=2γ,μργμ[k1](σγγσμμ)γμ.J = 2 \sum_{\gamma,\mu} \rho^{[k-1]}_{\gamma\mu} \, (\sigma_{\gamma\gamma}-\sigma_{\mu\mu}) \, \ket{\gamma}\bra{\mu}.

In the later error analysis, we will need to derive an algebraic form for the matrices JJ and KK. In order to do so, we introduce two matrices defined on the input states ρ and σ which will help in writing down algebraic forms for our matrices.

We define σ\overset{\smalltriangleup}{\sigma} as the matrix derived from σ that has the same diagonal elements as σ but with all off-diagonal elements set to zero. Therefore, for the matrix σ given as

σ=[σ00σ01σ0Lσ10σ11σ1LσL0σL1σLL]N×N\sigma = \begin{bmatrix} \sigma_{00} & \sigma_{01} & \dots & \sigma_{0L}\\ \sigma_{10} & \sigma_{11} & \dots & \sigma_{1L}\\ \vdots & \vdots & \ddots & \vdots\\ \sigma_{L0} & \sigma_{L1} & \dots & \sigma_{LL}\\ \end{bmatrix}_{N \times N}

the σ\overset{\smalltriangleup}{\sigma} matrix will be defined as

σ=[σ00000σ11000σLL]N×N\overset{\smalltriangleup}{\sigma} = \begin{bmatrix} \sigma_{00} & 0 & \dots & 0\\ 0 & \sigma_{11} & \dots & 0\\ \vdots & \vdots & \ddots & \vdots\\ 0 & 0 & \dots & \sigma_{LL}\\ \end{bmatrix}_{N \times N}

It is easy to see that if this operator is applied to ρ from the left or the right, we respectively obtain

σρ=γ,μσγγργμγμ,\overset{\smalltriangleup}{\sigma}\rho = \sum_{\gamma,\mu} \sigma_{\gamma\gamma}\rho_{\gamma\mu} \, \ket{\gamma}\bra{\mu},

and

ρσ=γ,μσμμργμγμ.\rho\overset{\smalltriangleup}{\sigma} = \sum_{\gamma,\mu} \sigma_{\mu\mu}\rho_{\gamma\mu} \, \ket{\gamma}\bra{\mu}.

Therefore, we can write

[σ,ρ]=σρρσ=γ,μργμ(σγγσμμ)γμ.\left[\overset{\smalltriangleup}{\sigma},\, \rho \right] = \overset{\smalltriangleup}{\sigma}\rho - \rho\overset{\smalltriangleup}{\sigma} = \sum_{\gamma,\mu} \rho_{\gamma\mu} \, (\sigma_{\gamma\gamma}-\sigma_{\mu\mu}) \, \ket{\gamma}\bra{\mu}.
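This commutator identity is easy to confirm numerically (a hedged NumPy sketch with our own variable names):

```python
import numpy as np

rng = np.random.default_rng(2)
N = 4
rho = rng.normal(size=(N, N)) + 1j * rng.normal(size=(N, N))
sigma = rng.normal(size=(N, N)) + 1j * rng.normal(size=(N, N))

sigma_diag = np.diag(np.diag(sigma))      # keep only the diagonal of sigma
comm = sigma_diag @ rho - rho @ sigma_diag

# Element-wise form: [sigma_diag, rho]_{gm} = rho_{gm} (sigma_gg - sigma_mm)
s = np.diag(sigma)
assert np.allclose(comm, rho * (s[:, None] - s[None, :]))
```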

Using the definition of σ\overset{\smalltriangleup}{\sigma}, we can write the JJ matrix from (172) as

J=2[σ,ρ[k1]],J = 2 \left[\overset{\smalltriangleup}{\sigma},\, \rho^{[k-1]} \right],

which is the form for JJ we will use later for the error analysis. Now we must similarly calculate the matrix KK.

Calculating KK

In order to simplify KK we use the definition of AA as a block matrix and write

A=γγγAγγ.A = \sum_\gamma \ket{\gamma}\bra{\gamma} \otimes A_{\gamma\gamma}.

Substituting this definition into (170), we have

K=trB[A(ρ[k1]σ)A]=trB[γ,μ(ργμ[k1]γμ)(AγγσAμμ)]=γ,μργμ[k1]tr(AγγσAμμ)γμ\begin{align} K = \tr_B \left[ A(\rho^{[k-1]}\otimes\sigma)A \right] &= \tr_B \Big[ \sum_{\gamma,\mu} \left( \rho^{[k-1]}_{\gamma\mu} \ket{\gamma}\bra{\mu} \right) \otimes \left( A_{\gamma\gamma}\sigma A_{\mu\mu} \right) \Big] \\ &= \sum_{\gamma,\mu} \rho^{[k-1]}_{\gamma\mu} \,\, \tr\Big( A_{\gamma\gamma}\sigma A_{\mu\mu} \Big) \ket{\gamma}\bra{\mu} \end{align}

Now, considering the form of matrices AγγA_{\gamma\gamma} derived in (111) and (112), one can calculate

tr(AγγσAμμ)=tr(AμμAγγσ)={1,γ=μ12(σγγ+σμμ),γμ\tr\Big( A_{\gamma\gamma} \sigma A_{\mu\mu} \Big) = \tr\Big( A_{\mu\mu} A_{\gamma\gamma} \sigma \Big) = \begin{cases} 1 \quad &,\, \gamma = \mu\\ 1-2(\sigma_{\gamma\gamma}+\sigma_{\mu\mu}) \quad &,\, \gamma \ne \mu \end{cases}

Using this lemma, we can write

K=γ,μργμ[k1]γμ2γμργμ[k1](σγγ+σμμ)γμ.K = \sum_{\gamma,\mu} \rho^{[k-1]}_{\gamma\mu} \, \ket{\gamma}\bra{\mu} - 2 \sum_{\gamma\ne\mu} \rho^{[k-1]}_{\gamma\mu} \,\, \Big( \sigma_{\gamma\gamma}+\sigma_{\mu\mu} \Big) \ket{\gamma}\bra{\mu}.

Simplifying the terms and using the definitions of the σ\overset{\smalltriangleup}{\sigma} and ρ\overset{\smalltriangleup}{\rho} matrices, we can finally write

K=ρ[k1]2{σ,ρ[k1]ρ},K = \rho^{[k-1]} - 2\left\{ \overset{\smalltriangleup}{\sigma},\, \rho^{[k-1]} - \overset{\smalltriangleup}{\rho} \right\},

where {,}\{\cdot, \cdot\} is the anti-commutator defined as

{ρ,σ}=ρσ+σρ.\{\rho, \sigma\} = \rho\sigma + \sigma\rho.

An important note is that the matrix ρ\overset{\smalltriangleup}{\rho} does not change at each iteration meaning that we have

ρ[0]=ρ[1]==ρ[m],\overset{\smalltriangleup}{\rho}^{[0]} = \overset{\smalltriangleup}{\rho}^{[1]} = \dots = \overset{\smalltriangleup}{\rho}^{[m]},

because the diagonal elements of ρ do not change after applying the partial phase operator at each iteration (see (153)) and that is why we can leave out the index k1k-1 of ρ[k1]\overset{\smalltriangleup}{\rho}^{[k-1]} in (183).

Writing down the recursion using JJ and KK

Using the calculated JJ and KK, we can rewrite (171) as

ρ[k]=cos2(Δ2)ρ[k1]+2icos(Δ2)sin(Δ2)[σ,ρ[k1]]+sin2(Δ2)(ρ[k1]2{σ,ρ[k1]ρ}),\begin{align} \rho^{[k]} = \cos^2\left(\frac{\Delta}{2}\right) \rho^{[k-1]} &+2i\cos\left(\frac{\Delta}{2}\right)\sin\left(\frac{\Delta}{2}\right) \left[\overset{\smalltriangleup}{\sigma},\, \rho^{[k-1]} \right] \\ \nonumber &+\sin^2\left(\frac{\Delta}{2}\right) \left(\rho^{[k-1]} - 2\left\{ \overset{\smalltriangleup}{\sigma},\, \rho^{[k-1]} - \overset{\smalltriangleup}{\rho} \right\}\right), \end{align}

which simplifies into

ρ[k]=ρ[k1]+2icos(Δ2)sin(Δ2)[σ,ρ[k1]]2sin2(Δ2){σ,ρ[k1]ρ}.\rho^{[k]} = \rho^{[k-1]} + 2i\cos\left(\frac{\Delta}{2}\right)\sin\left(\frac{\Delta}{2}\right) \left[\overset{\smalltriangleup}{\sigma},\, \rho^{[k-1]} \right]- 2\sin^2\left(\frac{\Delta}{2}\right) \left\{ \overset{\smalltriangleup}{\sigma},\, \rho^{[k-1]} - \overset{\smalltriangleup}{\rho} \right\}.

Now, we apply a Taylor expansion to the sin()\sin(\cdot) and cos()\cos(\cdot) functions for small Δ, resulting in

ρ[k]=ρ[k1]+iΔ[σ,ρ[k1]]Δ22{σ,ρ[k1]ρ}+O(Δ3).\rho^{[k]} = \rho^{[k-1]} + i\Delta\left[\overset{\smalltriangleup}{\sigma},\, \rho^{[k-1]} \right]- \frac{\Delta^2}{2}\left\{ \overset{\smalltriangleup}{\sigma},\, \rho^{[k-1]} - \overset{\smalltriangleup}{\rho} \right\} + \mathcal{O}(\Delta^3).

Evaluating the recursive formula

The next step is to evaluate the recursive formula in (188) to derive a formula for ρ[k]\rho^{[k]} as a function of the initial state ρ[0]\rho^{[0]}.

Let us calculate the next recursion step by substituting ρ[k1]\rho^{[k-1]}, expressed as a function of ρ[k2]\rho^{[k-2]}, into (188). After this substitution, ignoring the terms containing Δ3\Delta^3 and higher powers of Δ, we will have

ρ[k]=ρ[k2]+iΔ[σ,ρ[k2]]Δ22{σ,ρ[k2]ρ}+iΔ[σ,ρ[k2]]Δ2[σ,[σ,ρ[k2]]]Δ22{σ,ρ[k2]ρ}+O(Δ3).\begin{align} \rho^{[k]} = \rho^{[k-2]} &+i\Delta \left[\overset{\smalltriangleup}{\sigma}, \rho^{[k-2]} \right] -\frac{\Delta^2}{2} \left\{ \overset{\smalltriangleup}{\sigma}, \rho^{[k-2]} - \overset{\smalltriangleup}{\rho} \right\} \\ \nonumber &+ i\Delta \left[\overset{\smalltriangleup}{\sigma}, \rho^{[k-2]} \right] - \Delta^2\left[\overset{\smalltriangleup}{\sigma}, \left[\overset{\smalltriangleup}{\sigma}, \rho^{[k-2]} \right] \right]\\ \nonumber &-\frac{\Delta^2}{2} \left\{ \overset{\smalltriangleup}{\sigma}, \rho^{[k-2]} - \overset{\smalltriangleup}{\rho} \right\} + \mathcal{O}(\Delta^3). \end{align}

Simplifying the terms, we will have

ρ[k]=ρ[k2]+2iΔ[σ,ρ[k2]]Δ2[σ,[σ,ρ[k2]]]Δ2{σ,ρ[k2]ρ}+O(Δ3).\begin{align} \rho^{[k]} = \rho^{[k-2]} + 2i\Delta \left[\overset{\smalltriangleup}{\sigma}, \rho^{[k-2]} \right] - \Delta^2\left[\overset{\smalltriangleup}{\sigma}, \left[\overset{\smalltriangleup}{\sigma}, \rho^{[k-2]} \right] \right] - \Delta^2 \left\{ \overset{\smalltriangleup}{\sigma}, \rho^{[k-2]} - \overset{\smalltriangleup}{\rho} \right\} + \mathcal{O}(\Delta^3). \end{align}

At this point, we have derived a formula for ρ[k]\rho^{[k]} as a function of ρ[k2]\rho^{[k-2]} in (190). We have to repeat calculating these recursive steps until we reach a formula for ρ[k]\rho^{[k]} as a function of ρ[0]\rho^{[0]}.

If we continue the calculations for qq recursive steps, we can see that ρ[k]\rho^{[k]} as a function of ρ[kq]\rho^{[k-q]} will be

ρ[k]=ρ[kq]+i(qΔ)[σ,ρ[kq]]Δ2(1+2++(q1))[σ,[σ,ρ[kq]]]qΔ22{σ,ρ[kq]ρ}+O(Δ3),\begin{align} \rho^{[k]} = \rho^{[k-q]} + i(q\Delta) \left[\overset{\smalltriangleup}{\sigma}, \rho^{[k-q]} \right] - \Delta^2(1+2+\dots+(q-1))\left[\overset{\smalltriangleup}{\sigma}, \left[\overset{\smalltriangleup}{\sigma}, \rho^{[k-q]} \right] \right] - q\frac{\Delta^2}{2} \left\{ \overset{\smalltriangleup}{\sigma}, \rho^{[k-q]} - \overset{\smalltriangleup}{\rho} \right\} + \mathcal{O}(\Delta^3), \end{align}

for any positive integer qq. We can check that this equation matches the manual calculations we did previously for q=1q=1 and q=2q=2.

Using (191), we can find ρ[k]\rho^{[k]} as a function of ρ[0]=ρ\rho^{[0]}=\rho by choosing q=kq=k and writing

ρ[k]=ρ+i(kΔ)[σ,ρ]k(k1)2Δ2[σ,[σ,ρ]]kΔ22{σ,ρρ}+O(Δ3).\rho^{[k]} = \rho + i(k\Delta) \left[\overset{\smalltriangleup}{\sigma}, \rho \right] - \frac{k(k-1)}{2}\Delta^2\left[\overset{\smalltriangleup}{\sigma}, \left[\overset{\smalltriangleup}{\sigma}, \rho \right] \right] - k\frac{\Delta^2}{2} \left\{ \overset{\smalltriangleup}{\sigma}, \rho - \overset{\smalltriangleup}{\rho} \right\} + \mathcal{O}(\Delta^3).

This means that the final state after applying partial phase operations with all the mm copies (k=mk=m) will be

ρ[m]=ρ+i(mΔ)[σ,ρ]m(m1)2Δ2[σ,[σ,ρ]]mΔ22{σ,ρρ}+O(Δ3).\rho^{[m]} = \rho + i(m\Delta) \left[\overset{\smalltriangleup}{\sigma}, \rho \right] - \frac{m(m-1)}{2}\Delta^2\left[\overset{\smalltriangleup}{\sigma}, \left[\overset{\smalltriangleup}{\sigma}, \rho \right] \right] - m\frac{\Delta^2}{2} \left\{ \overset{\smalltriangleup}{\sigma}, \rho - \overset{\smalltriangleup}{\rho} \right\} + \mathcal{O}(\Delta^3).

This equation describes the state of the primary register after mm iterations (ρ[m]\rho^{[m]}) up to the second order approximation in Δ as a function of the number of iterations mm, the initial state ρ, the secondary state σ, and the phase step of each iteration Δ.

Next, we have to compare this output state with the desired final state.

The desired state

The initial state was

ρ=γ,μργμγμ,\rho = \sum_{\gamma,\mu} \rho_{\gamma\mu} \ket{\gamma}\bra{\mu},

and the desired state was

ρ=γ,μργμeiα(σγγσμμ)γμ.\rho' = \sum_{\gamma,\mu} \rho_{\gamma\mu} \, e^{i\alpha(\sigma_{\gamma\gamma}-\sigma_{\mu\mu})} \ket{\gamma}\bra{\mu}.

We can use Taylor expansion to write

ρ=γ,μργμ(1+iα(σγγσμμ)α22(σγγσμμ)2+O(α3))γμ.\rho' = \sum_{\gamma,\mu} \rho_{\gamma\mu} \, \left( 1+i\alpha(\sigma_{\gamma\gamma}-\sigma_{\mu\mu})-\frac{\alpha^2}{2}(\sigma_{\gamma\gamma}-\sigma_{\mu\mu})^2 + \mathcal{O}(\alpha^3) \right) \ket{\gamma}\bra{\mu}.

The partial phase operator was derived under the condition that it is correct up to the first-order approximation. This means we expect the calculations to be correct up to first order; therefore, we keep the terms up to second order in Δ to be able to analyze the accuracy.

We can further write

ρ=ρ+iαγ,μργμ(σγγσμμ)γμα22γ,μργμ(σγγσμμ)2γμ+O(α3).\rho' = \rho + i\alpha\sum_{\gamma,\mu}\rho_{\gamma\mu}(\sigma_{\gamma\gamma}-\sigma_{\mu\mu})\ket{\gamma}\bra{\mu}-\frac{\alpha^2}{2}\sum_{\gamma,\mu}\rho_{\gamma\mu}(\sigma_{\gamma\gamma}-\sigma_{\mu\mu})^2 \ket{\gamma}\bra{\mu} + \mathcal{O}(\alpha^3).

Now using the definition of the σ\overset{\smalltriangleup}{\sigma} matrix, we can rewrite the equation as

ρ=ρ+iα[σ,ρ]α22[σ,[σ,ρ]]+O(α3).\rho' = \rho + i\alpha \left[\overset{\smalltriangleup}{\sigma}, \rho \right]-\frac{\alpha^2}{2}\left[\overset{\smalltriangleup}{\sigma}, \left[\overset{\smalltriangleup}{\sigma}, \rho \right]\right] + \mathcal{O}(\alpha^3).

This is the desired state Taylor-expanded up to the second-order approximation in α. This expansion shares similar terms with the output state we calculated in (193). Next, we analyze these two states to calculate the error of the algorithm.

Error analysis

We can see that if we choose α=mΔ\alpha=m\Delta and substitute it in (198) of the desired state, we will have

ρ=ρ+i(mΔ)[σ,ρ]m22Δ2[σ,[σ,ρ]]+O(Δ3).\rho' = \rho + i(m\Delta) \left[\overset{\smalltriangleup}{\sigma}, \rho \right]-\frac{m^2}{2}\Delta^2\left[\overset{\smalltriangleup}{\sigma}, \left[\overset{\smalltriangleup}{\sigma}, \rho \right]\right] + \mathcal{O}(\Delta^3).

And comparing this state with the output of the iterative algorithm ρ[m]\rho^{[m]} from (193), we can see that

ρ[m]ρ=m2Δ2[σ,[σ,ρ]]m2Δ2{σ,ρρ}+O(Δ3).\rho^{[m]}-\rho' = \frac{m}{2}\Delta^2\left[\overset{\smalltriangleup}{\sigma}, \left[\overset{\smalltriangleup}{\sigma}, \rho \right] \right] - \frac{m}{2}\Delta^2 \left\{ \overset{\smalltriangleup}{\sigma}, \rho - \overset{\smalltriangleup}{\rho} \right\} + \mathcal{O}(\Delta^3).

This means that for the trace distance error, we have

δ=12trρ[m]ρO(mΔ2).\delta = \frac{1}{2}\tr\abs{\rho^{[m]}-\rho'} \le \mathcal{O}(m\Delta^2).

With this, we have proved that the error is

δ=O(mΔ2)=O(α2m).\delta = \mathcal{O}(m\Delta^2) = \mathcal{O}(\frac{\alpha^2}{m}).

This means that if we choose the maximum tolerable error of the final output of the algorithm to be δ and would like to apply the phase profile f(γ)=αϕ(γ)2f(\gamma) = \alpha \abs{\phi(\gamma)}^2 to the primary register ρ, we need to iteratively apply the partial phase operator

m=O(α2δ)m = \mathcal{O}\left(\frac{\alpha^2}{\delta}\right)

times on the primary register using copies of the state σ where each phase step is equal to

Δ=αm.\Delta = \frac{\alpha}{m}.

This final remark ends the error analysis section. Now we have a complete understanding of the partial phase operator and its use in the sample-based phase algorithm.

We also note that the partial phase operator we derived previously corresponds to the case of one repetition m=1m=1 in the iterative phase algorithm, and in this case, the trace distance error will be δ=O(Δ2)\delta=\mathcal{O}(\Delta^2) which confirms our previous derivations where we derived UU to be accurate for small Δ if Δ20\Delta^2\approx 0 is negligible.
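These scaling relations can also be verified numerically before moving to the Qiskit implementation. The following self-contained sketch (our own NumPy model of U, which matches the derived operator up to a global phase) runs the iteration for increasing m at fixed α and checks that the trace distance error shrinks:

```python
import numpy as np

def iterate_phase(rho, sigma, m, alpha):
    """Run m partial phase steps of size Delta = alpha / m."""
    N = rho.shape[0]
    d = np.ones(N * N, dtype=complex)
    for g in range(N):
        d[g * N + g] = np.exp(1j * alpha / m)   # exp(i*Delta) when gamma == mu
    U = np.diag(d)
    for _ in range(m):
        F = U @ np.kron(rho, sigma) @ U.conj().T
        rho = np.trace(F.reshape(N, N, N, N), axis1=1, axis2=3)  # trace out B
    return rho

def trace_distance(a, b):
    # Trace norm of a Hermitian difference via its eigenvalues
    return 0.5 * np.sum(np.abs(np.linalg.eigvalsh(a - b)))

rng = np.random.default_rng(1)
N, alpha = 4, 1.0
psi = rng.normal(size=N) + 1j * rng.normal(size=N)
phi = rng.normal(size=N) + 1j * rng.normal(size=N)
psi, phi = psi / np.linalg.norm(psi), phi / np.linalg.norm(phi)
rho, sigma = np.outer(psi, psi.conj()), np.outer(phi, phi.conj())

# Desired state: rho'_{gm} = rho_{gm} exp(i*alpha*(sigma_gg - sigma_mm))
s = np.real(np.diag(sigma))
rho_desired = rho * np.exp(1j * alpha * (s[:, None] - s[None, :]))

deltas = [trace_distance(iterate_phase(rho, sigma, m, alpha), rho_desired)
          for m in (10, 20, 40)]
# The error shrinks roughly like 1/m, as derived above
assert deltas[2] < deltas[1] < deltas[0]
```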

In the next section, we will put our derivations to the test by implementing the algorithm on a computer and verifying the error dependencies derived in this section. But before moving on to the implementations, let us briefly discuss what we can infer about the fidelity based on this error analysis.

Fidelity analysis

The fidelity analysis is important to us because we will use it in the implementation in the next section to verify our derivations.

In the error analysis, we used the trace distance error

δ=D(ρ[m],ρ)=12trρ[m]ρ\delta = \text{D}(\rho^{[m]}, \rho') = \frac{1}{2} \tr\abs{\rho^{[m]} - \rho'}

as a measure for the similarity of the output ρ[m]\rho^{[m]} and the desired ρ\rho' states. If δ is zero, it means that the output state is identical to the expected state.

We can also use another measure for this similarity which is the quantum fidelity Wilde (2013) of the two states

f=F(ρ[m],ρ),f = F\left( \rho^{[m]}, \rho' \right),

where the fidelity of states ρ and σ is defined as

F(ρ,σ)=(trρσρ)2.F\left( \rho, \sigma \right) = \Bigg( \tr \sqrt{\sqrt{\rho}\sigma\sqrt{\rho}} \Bigg)^2.

Using this definition, the fidelity ff equal to one corresponds to the two states being identical.

It is difficult to calculate the fidelity using this formula in the general case when both input states are mixed states. Yet, there is a general inequality relation between the trace distance and the fidelity of two states as

1F(ρ,σ)D(ρ,σ)1F(ρ,σ).1-{\sqrt {F(\rho ,\sigma )}}\leq D(\rho ,\sigma )\leq {\sqrt {1-F(\rho ,\sigma )}}\,.

In the case when at least one of the states ρ or σ is pure, the fidelity formula reduces to a simplified equation as

F(ρ,σ)=trρσ,F\left( \rho, \sigma \right) = \tr \rho\sigma,

and one of the inequality bounds becomes tighter as

1F(ρ,σ)D(ρ,σ).1- F(\rho ,\sigma ) \leq D(\rho ,\sigma ).

As we will see in the implementation section, we can use this simplified case because we usually want to apply phases to a particular state that is initially a pure state

ρ=ψψ.\rho = \ket{\psi}\bra{\psi}.

This means that the ideal expected state after the application of the phase will also be a pure state

ρ=ψψ.\rho' = \ket{\psi'}\bra{\psi'}.

Therefore, the fidelity of the output of the phase algorithm and the expected state will have the simple form of

f=F(ρ[m],ρ)=tr(ρ[m]ρ)=ψρ[m]ψ.f = F\left( \rho^{[m]}, \rho' \right) = \tr \left( \rho^{[m]} \rho' \right) = \bra{\psi'} \rho^{[m]} \ket{\psi'}.
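For a pure target state, both quantities are cheap to evaluate classically. The sketch below (illustrative numbers; the mixing parameter is our own choice) checks the bound 1 − f ≤ δ on a toy example:

```python
import numpy as np

# A random pure target state |psi'>
rng = np.random.default_rng(2)
N = 4
psi_t = rng.normal(size=N) + 1j * rng.normal(size=N)
psi_t /= np.linalg.norm(psi_t)
rho_target = np.outer(psi_t, psi_t.conj())

# A slightly mixed output state, standing in for rho^[m] (illustrative)
rho_out = 0.95 * rho_target + 0.05 * np.eye(N) / N

# Fidelity with a pure target: F(rho_out, |psi'><psi'|) = <psi'| rho_out |psi'>
f = np.real(psi_t.conj() @ rho_out @ psi_t)

# Trace distance via the eigenvalues of the Hermitian difference
eigs = np.linalg.eigvalsh(rho_out - rho_target)
D = 0.5 * np.sum(np.abs(eigs))

assert 1 - f <= D + 1e-9   # the bound 1 - F <= D for a pure target state
```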

And by combining the inequality in (210) and the trace distance calculated in (201), we can in this case write

1fδO(mΔ2),1-f \leq \delta \leq \mathcal{O}(m\Delta^2),

which means

1f=O(mΔ2)=O(α2m).1-f = \mathcal{O}(m\Delta^2) = \mathcal{O}(\frac{\alpha^2}{m}).

We will see in the implementation section that there is a quantum subroutine that can directly calculate the fidelity of two states inside the quantum computer when one of the states is pure.

The ideal result of the phase algorithm is a final output state identical to the desired state, which corresponds to a fidelity of one. (215) shows how we can bring the fidelity ff close to 1 by increasing the repetition number mm for a fixed phase coefficient α using the relation

1f=O(α2m).1-f = \mathcal{O}(\frac{\alpha^2}{m}).

We will use this fidelity analysis to verify our algorithm in the implementation section.

6Simulation and verification

After deriving the phase algorithm and discussing its accuracy in the previous sections, we progress to implementing it and analyzing its error dependency in this section. We use the Python programming language Van Rossum & Drake (2009) and the Qiskit library A-v et al. (2021) to implement and run the algorithm. We use the simulator provided by the library to emulate running the algorithm on a quantum computer.

We will start by defining a scheme for the verification procedure. Later, we run the scheme using different initial parameters such as the number of iterations mm and the partial phase step Δ and analyze the results in the following section. We will show that the errors behave as expected according to the derivations in the previous sections.

6.1Implementation scheme

Let us start by reviewing the important parameters for the implementation:

  • ψ(γ)\psi(\gamma): The initial state of the primary register.

    ψ=γψ(γ)γ\ket{\psi} = \sum_\gamma \psi(\gamma) \ket{\gamma} \nonumber
  • f(γ)f(\gamma): The phase profile we would like to apply consisting of a normalized profile ϕ(γ)2\abs{\phi(\gamma)}^2 and a phase coefficient α

    f(γ)=αϕ(γ)2f(\gamma) = \alpha \abs{\phi(\gamma)}^2 \nonumber
  • Δ: The phase step that is applied at each iteration using the partial phase operator.

  • mm: The number of iterations which is related to the other parameters as

    α=mΔ.\alpha = m \Delta. \nonumber

Considering these parameters, we take the following steps to implement a verification procedure. We choose the quantum registers of the primary and the secondary states to consist of n=3n=3 qubits for this demonstration, corresponding to N=23=8N=2^3=8 sample points stored in each register.

  1. Primary state: We choose a superposition of all basis states as the initial state of the primary register as

    ψ=1Nγγ,\ket{\psi} = \frac{1}{\sqrt{N}} \sum_\gamma \ket{\gamma}, \nonumber

    and we initialize the primary register to this state at the beginning of the algorithm. Any other state could have been chosen, but this uniform superposition contains all the basis states, allowing us to characterize the action of the phase algorithm on each of them.

  2. Secondary states: Next, we pick a normalized phase profile ϕ(γ)\phi(\gamma) to apply to the primary register. We will use ϕ(γ)\phi(\gamma) as the state of the secondary register in the phase algorithm. The aim is to use this phase profile along with different numbers of iterations mm and phase steps Δ to apply an overall phase to the primary register according to the relation

    f(γ)=αϕ(γ)2=mΔϕ(γ)2.f(\gamma) = \alpha \abs{\phi(\gamma)}^2 = m\Delta \abs{\phi(\gamma)}^2.

    This means that for different cases of mm, we need to prepare mm copies of the state

    ϕ=γϕ(γ)γ\ket{\phi} = \sum_\gamma \phi(\gamma) \ket{\gamma} \nonumber

    for the partial phase operators.

    For this demonstration, we choose ϕ\ket{\phi} to be

    ϕ=[00.60.200.20.40.60.2].\ket{\phi} = \begin{bmatrix} 0\\ 0.6\\ 0.2\\ 0\\ 0.2\\ 0.4\\ 0.6\\ 0.2 \end{bmatrix}.

    Using this state as the secondary states, we expect the output state of the primary register to be

    ψ=1Nγeif(γ)γ=1NγeimΔϕ(γ)2γ,\ket{\psi'} = \frac{1}{\sqrt{N}} \sum_\gamma e^{if(\gamma)} \ket{\gamma} = \frac{1}{\sqrt{N}} \sum_\gamma e^{im\Delta \abs{\phi(\gamma)}^2} \ket{\gamma}, \nonumber

    for each case of mm and Δ.

  3. The phase algorithm: Now that we have prepared all the quantum registers, we will iteratively use each of the mm copies of ϕ\ket{\phi} and apply the partial phase operator UU on the primary register using the ϕ\ket{\phi} states. We call the final state of the primary register at the output of this algorithm ρ[m]\rho^{[m]} and we expect it to be equal to ψψ\ket{\psi'}\bra{\psi'}.

  4. Fidelity analysis: Finally, in order to analyze the accuracy of the algorithm, we use the fidelity of ρ[m]\rho^{[m]} and ρ=ψψ\rho'= \ket{\psi'}\bra{\psi'} as the measure.

    f=F(ρ[m],ρ).f = F\left( \rho^{[m]}, \rho' \right). \nonumber

    As discussed in the derivations section, because the desired state ρ\rho' is a pure state, we expect the fidelity to behave as

    1f=O(mΔ2),1-f = \mathcal{O}(m\Delta^2), \nonumber

    for different cases of mm and Δ, and the formula for the fidelity to have a simple form as

    f=tr(ρ[m]ρ)=ψρ[m]ψ.f = \tr \left( \rho^{[m]} \rho' \right) = \bra{\psi'} \rho^{[m]} \ket{\psi'}. \nonumber

    In order to calculate the fidelity, we use a subroutine that directly acts on the output register to estimate the fidelity using the quantum computer.

    It is important to note that because the phase algorithm only changes the phases of the initial state, directly measuring the output state ρ[m]\rho^{[m]} in the computational basis and sampling the corresponding probability amplitudes P(γ)=γρ[m]γP(\gamma)= \bra{\gamma}\rho^{[m]}\ket{\gamma} will not reveal information about the applied phases. That is why we use such a subroutine to calculate the fidelity directly using the quantum computer, instead of measuring the output state and classically calculating the fidelity afterwards using its formula.

    Another possibility would be to use the quantum phase estimation (QPE) algorithm Kitaev (1995)Nielsen & Chuang (2010) to estimate the applied phases, but it is not practical because one would have to run QPE for each basis state γ\ket{\gamma} to estimate the individual phase terms eif(γ)e^{if(\gamma)} applied to the corresponding basis.

    The subroutine Miszczak et al. (2009) consists of an ancilla qubit and two input registers that have the same number of qubits. Let us call the ancilla qubit a\ket{a} and the input registers ρ1\rho_1 and ρ2\rho_2. The subroutine is plotted in Figure 8.

    A quantum subroutine calculating the overlap of two states \tr \left( \rho_1 \rho_2 \right).

    Figure 8:A quantum subroutine calculating the overlap of two states tr(ρ1ρ2)\tr \left( \rho_1 \rho_2 \right).

    It is straightforward to show that the expected value of the Pauli-ZZ operator on the ancilla qubit at the output a\ket{a'} is equal to the overlap of the input states as

    aZ^a=tr(ρ1ρ2).\bra{a'}\hat{Z}\ket{a'} = \tr\left( \rho_1\rho_2 \right). \nonumber

    Therefore, we can feed in the output of the phase algorithm ρ[m]\rho^{[m]} as one of the inputs ρ1=ρ[m]\rho_1=\rho^{[m]} and initialize the second register in the expected state ρ2=ρ=ψψ\rho_2=\rho'=\ket{\psi'}\bra{\psi'}; then using this subroutine, we can calculate the fidelity as

    aZ^a=tr(ρ[m]ρ)=f.\bra{a'}\hat{Z}\ket{a'} = \tr\left( \rho^{[m]}\rho' \right) = f. \nonumber

    Note that the overlap is equal to the fidelity in this case only because one of the input states is pure ρ=ψψ\rho'=\ket{\psi'}\bra{\psi'} where

    ψ=1Nγeif(γ)γ.\ket{\psi'} = \frac{1}{\sqrt{N}} \sum_\gamma e^{if(\gamma)} \ket{\gamma}. \nonumber

    We also note that the expected value of the ZZ operator is nothing but the difference of the probabilities of measuring the ancilla qubit in the state 0\ket{0} or 1\ket{1}

    aZ^a=P(0)P(1).\bra{a'}\hat{Z}\ket{a'} = P(0) - P(1).
  5. Statistical considerations: Each time we carry out steps 1 to 4 and measure the ancilla qubit of the fidelity calculator subroutine, we will find the ancilla qubit in one of the states 0\ket{0} or 1\ket{1}. In order to estimate the fidelity, we need to repeat the steps several times to obtain an accurate estimation of ff. We call the number of times we repeat the steps for estimating a specific ff “the number of shots” and denote it as NsN_s.

    After each case of running the scheme, we will end up with an estimation for the fidelity ff for a specific mm and Δ, and if we repeat that for different mm and Δ, we will have an estimation of the fidelities as a function of the parameters as

    f(m,Δ).f(m, \Delta). \nonumber

    However, because we are also interested in the statistical errors of the calculations, we will repeat the calculations for each f(m,Δ)f(m, \Delta) several times in order to calculate the average and standard deviation for each case and to be able to calculate the corresponding error bars and ensure reproducibility. We call this repetition number NrN_r.
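The fidelity-estimation part of this scheme (steps 4 and 5) can be sketched classically. The following NumPy snippet builds the controlled-swap subroutine of Figure 8 for two pure input states, checks that Z^\hat{Z} on the ancilla equals the overlap, and then estimates the fidelity from NsN_s simulated shots; the register size and the seed are arbitrary example choices, and this is an illustration rather than the Qiskit implementation used in the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

d = 4  # dimension of each register (n = 2 qubits), an arbitrary example size

def rand_state(dim):
    v = rng.normal(size=dim) + 1j * rng.normal(size=dim)
    return v / np.linalg.norm(v)

psi1, psi2 = rand_state(d), rand_state(d)   # rho1 and rho2, both pure here

# Full register: ancilla (dim 2) x reg1 (dim d) x reg2 (dim d), ancilla in |0>
state = np.kron(np.array([1.0, 0.0]), np.kron(psi1, psi2))

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
Ireg = np.eye(d * d)

# SWAP exchanges the two d-dimensional registers
SWAP = np.zeros((d * d, d * d))
for i in range(d):
    for j in range(d):
        SWAP[i * d + j, j * d + i] = 1.0

# Controlled-SWAP: identity for ancilla |0>, SWAP for ancilla |1>
CSWAP = np.kron(np.diag([1.0, 0.0]), Ireg) + np.kron(np.diag([0.0, 1.0]), SWAP)

# H on the ancilla, controlled-SWAP, H on the ancilla (Figure 8)
state = np.kron(H, Ireg) @ state
state = CSWAP @ state
state = np.kron(H, Ireg) @ state

# <Z> = P(0) - P(1) on the ancilla equals tr(rho1 rho2)
p0 = np.linalg.norm(state[: d * d]) ** 2
overlap_exact = abs(np.vdot(psi1, psi2)) ** 2
assert abs((2 * p0 - 1) - overlap_exact) < 1e-10

# Shot-based estimation of the overlap with N_s = 10000 samples
N_s = 10000
shots = rng.random(N_s) < p0
f_est = 2 * shots.mean() - 1
print(f"exact = {overlap_exact:.4f}, estimated = {f_est:.4f}")
```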

The preceding verification procedure is schematically shown in Figure 9.

The schematic figure of the verification procedure

Figure 9:The schematic figure of the verification procedure

We run the verification for the cases of the phase steps

Δ{0.1,0.2,0.3,0.4,0.5,0.6,0.7}\Delta \in \{ 0.1,\, 0.2,\, 0.3,\, 0.4,\, 0.5,\, 0.6,\, 0.7 \} \nonumber

and the number of repetitions

m{0,1,2,3,4}.m \in \{ 0,\, 1,\, 2,\, 3,\, 4 \}. \nonumber

This yields an estimate of the fidelity f(m,Δ)f(m, \Delta) for the above-mentioned parameters, allowing us to verify whether it follows the expected behavior derived in our calculations

1f(m,Δ)=O(mΔ2).1 - f(m, \Delta) = \mathcal{O}(m\Delta^2).

6.2Analysis of the results

Let us provide and discuss the results of the simulations.

For the estimation of the fidelity in each case, we ran the algorithm with Ns=10000N_s=10000 shots, and in order to analyze the statistics, we calculated each fidelity Nr=50N_r=50 times.

We can see the results in Figure 10 and Figure 11. Figure 10 plots the fidelity as a function of mm (the number of iterations) for different phase steps Δ of the partial phase operators; and Figure 11 plots the fidelity as a function of Δ, for different numbers of iterations mm. The dashed lines in Figure 10 and Figure 11 are respectively, linear and quadratic regressions of the measured fidelities.

The error bar plot of the measured fidelity at the output of the phase algorithm as a function of m for different phase steps \Delta and the linear regression lines.

Figure 10:The error bar plot of the measured fidelity at the output of the phase algorithm as a function of mm for different phase steps Δ and the linear regression lines.

The error bar plot of the measured fidelity at the output of the phase algorithm as a function of \Delta for different numbers of iterations m and the quadratic regression lines.

Figure 11:The error bar plot of the measured fidelity at the output of the phase algorithm as a function of Δ for different numbers of iterations mm and the quadratic regression lines.

As we can see, the results confirm the linear and quadratic behaviors of the fidelity as a function of the parameters mm and Δ as expected in (234). Using the regressions, we can calculate the proportionality coefficient in (234) to be

1fβmΔ2,1-f \approx \beta \, m\Delta^2, \nonumber

for the range of mm and Δ we used in our case, where β is

β=0.078±0.007.\beta = 0.078 \pm 0.007 \,.

Using this verification scheme, we managed to characterize the accuracy of the phase algorithm using quantum fidelity. However, we can also analyze the output state of the algorithm ρ[m]\rho^{[m]} in terms of wavefunctions instead of calculating the fidelity using the fidelity calculator subroutine.

Analyzing the wavefunctions

We know that the output of the phase algorithm is, in general, described as a density matrix ρ[m]\rho^{[m]}. Therefore, we cannot represent it using just a single wavefunction. What we can do instead is to diagonalize this Hermitian matrix in its orthonormal eigenbasis as

ρ[m]=i=1Npiηiηi,\rho^{[m]} = \sum_{i=1}^{N} p_i \ket{\eta_i}\bra{\eta_i},

where pip_i are the eigenvalues and correspond to the probability of ρ[m]\rho^{[m]} being in the state ηiηi\ket{\eta_i}\bra{\eta_i}. We also have ipi=1\sum_i p_i = 1.

The generally high fidelity in our previous simulations suggests that the output state ρ[m]\rho^{[m]} is close to the desired state ψψ\ket{\psi'}\bra{\psi'}. We therefore expect one of the probabilities pip_i to be close to one and the others close to zero, so that just one of the states ηi\ket{\eta_i} contributes significantly to ρ[m]\rho^{[m]}, and we expect this state to be close to the desired state ψ\ket{\psi'}. We can thus take the highest probability pmax=maxpip_{\text{max}} = \max p_i and its corresponding state η\ket{\eta} from the diagonalization, and use this state to approximately represent the output density matrix ρ[m]\rho^{[m]} as long as pmaxp_{\text{max}} is close to one.

This allows us to compare the wavefunction of the state η\ket{\eta} to the expected wavefunction of ψ\ket{\psi'} by plotting them in a figure. In the following, we will present the results of such simulations.
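The diagonalization step above can be sketched as follows. Since the actual output is produced by the phase algorithm, the density matrix here is an artificial stand-in for ρ[m]\rho^{[m]}: a pure target state mixed with a small random perturbation, with arbitrary example weights:

```python
import numpy as np

rng = np.random.default_rng(1)

# Stand-in for rho^[m]: a pure target state |psi'> mixed with a small perturbation
N = 8
psi_p = np.exp(1j * rng.uniform(0, 2 * np.pi, N)) / np.sqrt(N)
noise = rng.normal(size=(N, N)) + 1j * rng.normal(size=(N, N))
noise = noise @ noise.conj().T                    # random positive matrix
rho = 0.95 * np.outer(psi_p, psi_p.conj()) + 0.05 * noise / np.trace(noise)

# Diagonalize the Hermitian matrix and pick the highest-probability eigenstate
p, eta = np.linalg.eigh(rho)                      # eigenvalues in ascending order
p_max, eta_max = p[-1], eta[:, -1]

assert abs(p.sum() - 1) < 1e-12                   # the p_i sum to one
print(f"p_max = {p_max:.3f}")
print(f"|<eta|psi'>|^2 = {abs(np.vdot(eta_max, psi_p)) ** 2:.3f}")
```

For a nearly pure ρ[m]\rho^{[m]}, both pmaxp_{\text{max}} and the overlap of η\ket{\eta} with ψ\ket{\psi'} come out close to one, which is exactly the regime in which η\ket{\eta} is a faithful stand-in for the output.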

For this section, we choose a fixed phase coefficient α=1\alpha=1 and realize it in five cases

{m=5Δ=0.2,{m=4Δ=0.25,{m=3Δ=13,{m=2Δ=0.5,{m=1Δ=1.\begin{cases} m=5 \\ \Delta= 0.2 \end{cases},\quad \begin{cases} m=4 \\ \Delta= 0.25 \end{cases},\quad \begin{cases} m=3 \\ \Delta= \frac{1}{3} \end{cases},\quad \begin{cases} m=2 \\ \Delta= 0.5 \end{cases},\quad \begin{cases} m=1 \\ \Delta= 1 \end{cases}.

According to the previous error analysis, we expect the accuracy of the phase algorithm to decrease by decreasing the number of iterations for a fixed α.

For each case, we run the phase algorithm as previously described in the scheme but only take steps 1 to 3. Then, instead of performing the fidelity analysis, we extract the density matrix ρ[m]\rho^{[m]} at the output from the Qiskit simulation, diagonalize it, and plot the phase and the amplitude of the wavefunction of the highest contributing state. We note that this simple extraction of the matrix ρ[m]\rho^{[m]} is possible only because we are using the quantum computer simulator of Qiskit, whereas, in a real quantum computer, one has to perform state tomography to estimate the density matrix. We also note that there are no statistical considerations involved as we are not sampling the results of measuring an observable but analyzing the states directly.

We can see the results of the simulations in Figure 12 to Figure 16. In each figure, the wavefunction η(γ)\eta(\gamma) corresponding to the state η\ket{\eta} is plotted along with the desired wavefunction ψ(γ)\psi'(\gamma) as a reference. In the figures, we have also included the probability pmaxp_{\text{max}} with which the state η\ket{\eta} contributes in the diagonalization of ρ[m]\rho^{[m]}.

The comparison of the highest contributing wavefunction \eta(\gamma) to the output state \rho^{[m]} with the expected wavefunction \psi'(\gamma), in the case of m=5, \Delta=0.2.

Figure 12:The comparison of the highest contributing wavefunction η(γ)\eta(\gamma) to the output state ρ[m]\rho^{[m]} with the expected wavefunction ψ(γ)\psi'(\gamma), in the case of m=5m=5, Δ=0.2\Delta=0.2.

The comparison of the highest contributing wavefunction \eta(\gamma) to the output state \rho^{[m]} with the expected wavefunction \psi'(\gamma), in the case of m=4, \Delta=0.25.

Figure 13:The comparison of the highest contributing wavefunction η(γ)\eta(\gamma) to the output state ρ[m]\rho^{[m]} with the expected wavefunction ψ(γ)\psi'(\gamma), in the case of m=4m=4, Δ=0.25\Delta=0.25.

The comparison of the highest contributing wavefunction \eta(\gamma) to the output state \rho^{[m]} with the expected wavefunction \psi'(\gamma), in the case of m=3, \Delta=\frac{1}{3}.

Figure 14:The comparison of the highest contributing wavefunction η(γ)\eta(\gamma) to the output state ρ[m]\rho^{[m]} with the expected wavefunction ψ(γ)\psi'(\gamma), in the case of m=3m=3, Δ=13\Delta=\frac{1}{3}.

The comparison of the highest contributing wavefunction \eta(\gamma) to the output state \rho^{[m]} with the expected wavefunction \psi'(\gamma), in the case of m=2, \Delta=0.5.

Figure 15:The comparison of the highest contributing wavefunction η(γ)\eta(\gamma) to the output state ρ[m]\rho^{[m]} with the expected wavefunction ψ(γ)\psi'(\gamma), in the case of m=2m=2, Δ=0.5\Delta=0.5.

The comparison of the highest contributing wavefunction \eta(\gamma) to the output state \rho^{[m]} with the expected wavefunction \psi'(\gamma), in the case of m=1, \Delta=1.

Figure 16:The comparison of the highest contributing wavefunction η(γ)\eta(\gamma) to the output state ρ[m]\rho^{[m]} with the expected wavefunction ψ(γ)\psi'(\gamma), in the case of m=1m=1, Δ=1\Delta=1.

As we can see in the figures, the η(γ)\eta(\gamma) wavefunction closely resembles the expected wavefunction ψ(γ)\psi'(\gamma) in most cases. Only when Δ reaches the higher values of Δ=0.5\Delta = 0.5 and Δ=1\Delta = 1 does the wavefunction deviate significantly from the expected ψ(γ)\psi'(\gamma). Comparing the pmaxp_{\text{max}} values, we can also see how the ρ[m]\rho^{[m]} density matrix deviates from being a pure state as Δ increases.

This simulation agrees with the previous fidelity analysis and lets us inspect the output wavefunction itself, in the weak sense of the highest contributing eigenstate in the diagonalization of ρ[m]\rho^{[m]}.

With this, we conclude the verification of the phase algorithm derived in Section 5 and the main body of this thesis report.

7Conclusion

In this work, we derived a quantum phase algorithm that receives two input states and is capable of applying phases to one of them based on the state of the other. The algorithm transforms an initial state

ψ=γψ(γ)γ\ket{\psi} = \sum_\gamma \psi(\gamma) \ket{\gamma}

to the state which is approximately equal to

ψ=γψ(γ)eiαϕ(γ)2γ,\ket{\psi'} = \sum_\gamma \psi(\gamma) \, e^{i\alpha\abs{\phi(\gamma)}^2} \ket{\gamma},

given mm copies of the state

ϕ=γϕ(γ)γ,\ket{\phi} = \sum_\gamma \phi(\gamma) \ket{\gamma},

with a final trace distance error of

δ=O(α2m).\delta = \mathcal{O}\left( \frac{\alpha^2}{m} \right).

Assuming the registers are initialized to the input states, the gate complexity of each step of the algorithm is

O(n)=O(logN),\mathcal{O}(n) = \mathcal{O}(\log N),

where nn is the number of qubits in each register.

The important property of this algorithm is that the phase profile that is applied to the primary register is provided to the quantum computer as another state encoded in a quantum register. This property reduces the problem of applying phase profiles to a quantum register to the problem of initialization. As long as we can initialize the phase profile efficiently in a quantum memory, we can apply a corresponding phase to our primary register.

7.1Remarks and limitations

We note that for small phase coefficients α1\alpha \ll 1, the phase algorithm performs excellently both in terms of gate complexity and memory consumption, since the error O(α2m)\mathcal{O}\left( \frac{\alpha^2}{m} \right) remains low even with a single iteration m=1m=1.

However, the main limitation of the algorithm in its current form is the memory consumption for the cases of large α because of the square dependency in the error. Because the error increases as δα2\delta \propto \alpha^2, if we would like to achieve a constant error while increasing α, we have to also increase the number of samples used in the iterations as mα2m \propto \alpha^2.

We can propose several ways to deal with this limitation.

  1. In our discussions in this thesis report, we assumed we wanted to apply phase terms

    eif(γ)e^{if(\gamma)}

    corresponding to a phase profile f(γ)f(\gamma), and α was the normalization factor of the phase profile f(γ)=αϕ(γ)2f(\gamma)=\alpha\abs{\phi(\gamma)}^2.

    First, we realize that global phases do not result in different quantum states. This means that adding a constant real number cc to ff will not change the final state of the output of the phase algorithm

    f~(γ)=f(γ)+c.\Tilde{f}(\gamma) = f(\gamma) + c.

    We also realize that adding multiple factors of 2π2\pi will not change the output state either

    f~(γ)=f(γ)+2πk(γ),\Tilde{f}(\gamma) = f(\gamma) + 2\pi k(\gamma),

    because both phase profiles will correspond to the same phase terms eif~(γ)=eif(γ)e^{i\Tilde{f}(\gamma)}=e^{if(\gamma)}; the factors k(γ)k(\gamma) are integers that can generally be different for different γ\gamma.

    Now combining both of the observations, we recognize that we have the freedom of constructing a phase profile f~\Tilde{f} based on the initial desired profile ff as

    f~(γ)=f(γ)+2πk(γ)+c,\Tilde{f}(\gamma) = f(\gamma) + 2\pi k(\gamma) + c,

    and using f~\Tilde{f} as the phase profile in the phase algorithm will result in the same output state as in the case of using ff.

    This freedom of choosing the phase profile, combined with the fact that phases can always be wrapped into the bounds 0f~(γ)2π0 \le \Tilde{f}(\gamma) \le 2\pi, allows us to construct for any ff a phase profile f~\Tilde{f} that in some cases has a smaller definite integral and thus a smaller normalization factor α~<α\Tilde{\alpha} < \alpha

    f~(γ)=α~ϕ~(γ)2.\Tilde{f}(\gamma)=\Tilde{\alpha}\abs{\Tilde{\phi}(\gamma)}^2.

    Using the new ϕ~(γ)\Tilde{\phi}(\gamma) state and the smaller phase coefficient α~\Tilde{\alpha}, we will need fewer iterations than in the original case of using f(γ)f(\gamma).

  2. Continuing the previous suggestion, we also note that because of the bounds 0f~(γ)2π0 \le \Tilde{f}(\gamma) \le 2\pi, there is an upper limit on the definite integral and thus on the phase coefficient α, as we can always wrap the phase into these bounds no matter how large the profile f(γ)f(\gamma) is.

    Therefore, the squared dependency in O(α2m)\mathcal{O}\left( \frac{\alpha^2}{m} \right) only holds for small α; for large α, the error saturates in its α dependency. One could investigate this effect in more detail to find the exact error relation for large α.
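The first suggestion can be illustrated numerically: wrapping a phase profile modulo 2π2\pi and removing the constant offset leaves every phase term eif(γ)e^{if(\gamma)} unchanged (up to a global phase) while potentially shrinking the coefficient α. A minimal sketch with an arbitrary example profile, where α is the discrete analogue of the definite integral:

```python
import numpy as np

# An example phase profile f(gamma) with a large overall scale
gamma = np.arange(8)
f = 3.0 + 2 * np.pi * gamma / 4.0
alpha = f.sum()                          # original phase coefficient

# Wrapping modulo 2*pi leaves every phase term e^{i f(gamma)} unchanged
wrapped = np.mod(f, 2 * np.pi)
assert np.allclose(np.exp(1j * wrapped), np.exp(1j * f))

# Removing the constant offset only changes the global phase
c = wrapped.min()
f_tilde = wrapped - c
assert np.allclose(np.exp(1j * (f_tilde + c)), np.exp(1j * f))

# The new profile has a smaller coefficient and a valid normalized |phi~|^2
alpha_tilde = f_tilde.sum()
phi_tilde_sq = f_tilde / alpha_tilde
assert alpha_tilde < alpha
assert abs(phi_tilde_sq.sum() - 1) < 1e-12
print(f"alpha = {alpha:.2f} -> alpha_tilde = {alpha_tilde:.2f}")
```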

7.2Outlook

The phase algorithm introduced in this work is useful for many interesting applications. The key concept for making use of this algorithm is to find a way to model a specific problem as a problem of phase transformation. We briefly touched upon possible applications in the introduction of this thesis but will now suggest them as use cases of the phase algorithm worthy of further investigation.

Before discussing the applications, let us look into the relevance of the quantum Fourier transformation (QFT) Nielsen & Chuang (2010). Combining the phase algorithm with the QFT greatly broadens the range of problems solvable by the phase algorithm. Assume we are given a signal u(x)u(x), and we have discretized it with a sampling period of Δx\Delta x and stored the discretized signal u(xγ)u(x_\gamma) in our quantum register as

ψ=γ=N2N21u(xγ)γ,\ket{\psi} = \sum_{\gamma=-\frac{N}{2}}^{\frac{N}{2}-1} u(x_\gamma) \ket{\gamma},

where xγ=γΔxx_\gamma = \gamma \Delta x. Applying the QFT operator on this register will result in

Ψ=QFT^ψ=γ=N2N21U(kγ)γ,\ket{\Psi} = \widehat{QFT} \ket{\psi} = \sum_{\gamma=-\frac{N}{2}}^{\frac{N}{2}-1} U(k_\gamma) \ket{\gamma},

where U(kγ)U(k_\gamma) is the discrete Fourier spectrum of the signal u(xγ)u(x_\gamma) and kγ=γΔk=γ2πNΔxk_\gamma = \gamma\Delta k = \gamma \frac{2\pi}{N\Delta x}.

We note that the QFT algorithm is implemented using O((logN)2)\mathcal{O}\left( (\log N)^2 \right) gates.
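The grid relations above can be checked classically with a centered DFT matrix, which is the transform the QFT implements on the register amplitudes; the concrete NN and Δx\Delta x are arbitrary example values:

```python
import numpy as np

N, dx = 8, 0.5                            # example grid parameters
gamma = np.arange(N) - N // 2             # gamma = -N/2, ..., N/2 - 1
x = gamma * dx                            # sample points x_gamma
dk = 2 * np.pi / (N * dx)                 # spectral sampling period Delta k
k = gamma * dk                            # k_gamma grid

# Centered DFT matrix: U(k_m) = (1/sqrt(N)) sum_gamma u(x_gamma) e^{-i k_m x_gamma}
F = np.exp(-1j * np.outer(k, x)) / np.sqrt(N)

# A plane wave sampled at k_j transforms to a single spike at index j
j = 5
u = np.exp(1j * k[j] * x) / np.sqrt(N)
U = F @ u
assert np.allclose(np.abs(U), np.eye(N)[j], atol=1e-12)
print(np.round(np.abs(U), 3))
```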

Beam propagation simulation

The ability to transform between real and Fourier domains enables us to simulate various cases of the propagation of a monochromatic field. The problem of beam propagation starts with an initial field in the transverse plane perpendicular to the direction of motion and deals with calculating the field distribution after propagating through a medium. In the following discussion, we assume one transverse direction xx and the propagation direction to be zz.

Starting with the initial field u(x)u(x), arbitrary phase masks t(x)t(x) can be applied in the real domain by applying the phase algorithm on the ψ\ket{\psi} register as

ψψ=γ=N2N21u(xγ)eit(xγ)γ.\ket{\psi} \longrightarrow \ket{\psi'} = \sum_{\gamma=-\frac{N}{2}}^{\frac{N}{2}-1} u(x_\gamma) \, e^{it(x_\gamma)} \ket{\gamma}.

Phase masks also include the special case of a thin lens with the focal length ff where the effect of the lens can be modeled via a quadratic phase Saleh & Teich (1991) as

t(xγ)=2πλxγ22f,t(x_\gamma) = -\frac{2\pi}{\lambda} \frac{x_\gamma^2}{2f},

where λ is the wavelength of the monochromatic field.

Furthermore, the propagation of the beam for a distance zz in a medium having a known dispersion relation can be modeled by applying a phase transfer function to the Fourier spectrum of the field

H(kγ;z)=eik2kγ2z,H(k_\gamma; z) = e^{i \sqrt{k^2 - k_\gamma^2} \, z},

where kk is derived from a dispersion relation k2(λ)=(2πλ)2n2(λ)k^2(\lambda)=\left(\frac{2\pi}{\lambda}\right)^2 n^2(\lambda), with n(λ)n(\lambda) the refractive index. As long as we deal with real refractive indices and can ignore large Fourier components such that the term k2kγ2\sqrt{k^2 - k_\gamma^2} stays real, the transfer function simply applies a phase; hence, we can simulate the propagation using the phase algorithm acting on the Fourier representation of the field U(kγ)U(k_\gamma). We also note that one can use a Taylor expansion to approximate the phase term in the transfer function. The most notable case is the well-known paraxial approximation, where the phase term is approximated by a quadratic term Saleh & Teich (1991).

We can combine the propagation and phase mask elements in a sequence to simulate various optical experiments.
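As a classical reference for such a sequence, the following NumPy sketch implements the angular spectrum method described above: transform to the Fourier domain, multiply by the phase transfer function H(kγ;z)H(k_\gamma; z), transform back, and apply a thin lens phase mask in between. The beam parameters are arbitrary example values:

```python
import numpy as np

# Transverse grid and a monochromatic Gaussian beam u(x) at z = 0
N, dx = 256, 1e-6                       # 256 samples, 1 um spacing (example values)
x = (np.arange(N) - N // 2) * dx
lam = 1e-6                              # wavelength, assuming n(lam) = 1 (free space)
k0 = 2 * np.pi / lam
u0 = np.exp(-(x / 10e-6) ** 2)          # Gaussian beam, ~10 um waist

# Spatial frequency grid k_gamma in np.fft ordering
kx = 2 * np.pi * np.fft.fftfreq(N, d=dx)

def propagate(u, z):
    """Apply the phase transfer function H(k; z) in the Fourier domain."""
    kz = np.sqrt(np.maximum(k0 ** 2 - kx ** 2, 0.0))  # keep the root real
    return np.fft.ifft(np.fft.fft(u) * np.exp(1j * kz * z))

def lens(u, f):
    """Thin lens phase mask t(x) = -(2 pi / lam) x^2 / (2 f)."""
    return u * np.exp(-1j * k0 * x ** 2 / (2 * f))

# Propagate, pass through a lens, propagate again
u1 = propagate(u0, 100e-6)
u2 = propagate(lens(u1, 200e-6), 100e-6)

# Phase-only elements conserve the total power
power_in = np.sum(np.abs(u0) ** 2)
power_out = np.sum(np.abs(u2) ** 2)
assert abs(power_out - power_in) < 1e-9 * power_in
```

On a quantum computer, the two FFT calls in `propagate` would be replaced by a QFT and an inverse QFT, and each phase multiplication by an application of the phase algorithm.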

Hamiltonian simulation in quantum mechanics

We can also use QFT and the phase algorithm to simulate quantum mechanical systems. Assuming we start with a wavefunction stored in the quantum register

ψ=γ=N2N21ψ(xγ)γ,\ket{\psi} = \sum_{\gamma=-\frac{N}{2}}^{\frac{N}{2}-1} \psi(x_\gamma) \ket{\gamma},

the momentum representation of this state can be calculated using the QFT as

Ψ=QFT^ψ=γ=N2N21Ψ(pγ)γ,\ket{\Psi} = \widehat{QFT} \ket{\psi} = \sum_{\gamma=-\frac{N}{2}}^{\frac{N}{2}-1} \Psi(p_\gamma) \ket{\gamma},

where pγ=kγ=γ2πNΔxp_\gamma = \hbar k_\gamma = \gamma \frac{2\pi\hbar}{N\Delta x} using the de Broglie relation Broglie (1970).

This allows us to simulate any Hamiltonian consisting of a kinetic energy TT term as a function of the momentum operator P^\hat{P} and a potential energy VV term as a function of the position operator X^\hat{X}

H^=T(P^)+V(X^).\hat{H} = T(\hat{P}) + V(\hat{X}).

The Hamiltonian, for example, can correspond to the harmonic oscillator where H^=P^22m+12kX^2\hat{H} = \frac{\hat{P}^2}{2m} + \frac{1}{2}k\hat{X}^2.

The unitary operator of the time evolution is thus

U^=ei(T(P^)+V(X^))t.\Hat{U} = e^{-\frac{i}{\hbar}\left( T(\hat{P}) + V(\hat{X}) \right)t}.

Using the Lie–Trotter product formula Trotter (1959), we can write the evolution operator as a sequence of individual phase operators in terms of position or momentum operators

U^(eiT(P^)ΔteiV(X^)Δt)m,\Hat{U} \approx \left( e^{-\frac{i}{\hbar} T(\hat{P}) \Delta t} \,\, e^{-\frac{i}{\hbar} V(\hat{X}) \Delta t} \right)^m,

where Δt=tm\Delta t = \frac{t}{m}, and the approximation becomes an equality in the limit mm\to\infty.

As long as mm is large enough to make the phase profiles fp(p)=T(p)Δtf_p(p) = -\frac{1}{\hbar}T(p)\Delta t and fx(x)=V(x)Δtf_x(x) = -\frac{1}{\hbar}V(x)\Delta t small, these phase transformations can be done using the partial phase operator. This means that we can implement the time evolution unitary by combining 2m2m partial phase operators plus mm forward and mm backward QFTs. The computational complexity as a function of NN in this case will thus be O((logN)2)\mathcal{O}\left( (\log N)^2 \right).

Note that the phase operator eiV(X^)Δte^{-\frac{i}{\hbar}V(\hat{X})\Delta t} applies the phase in the position representation and can therefore be implemented using the partial phase operator alone. However, the phase operator eiT(P^)Δte^{-\frac{i}{\hbar}T(\hat{P})\Delta t} applies the phase in the momentum representation; thus, we need to transform the register to the momentum representation using a QFT, apply the phase, and transform back to the real domain using an inverse QFT.
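A classical split-step sketch of this Trotterized evolution, applying the potential phase in the position basis and the kinetic phase in the momentum basis via FFTs, is shown below for the harmonic oscillator. After one full period T=2π/ωT = 2\pi/\omega, a coherent state should return to itself up to a global phase; all parameters are example values:

```python
import numpy as np

# Split-step (Trotter) evolution for H = P^2/(2m) + (1/2) k X^2, hbar = m = k = 1
hbar, mass, kspring = 1.0, 1.0, 1.0
omega = np.sqrt(kspring / mass)

N, L = 256, 20.0                         # grid size and box length (example values)
dx = L / N
x = (np.arange(N) - N // 2) * dx
p = hbar * 2 * np.pi * np.fft.fftfreq(N, d=dx)

# Coherent state: ground-state Gaussian displaced to x0
x0 = 2.0
psi = np.exp(-mass * omega * (x - x0) ** 2 / (2 * hbar)).astype(complex)
psi /= np.sqrt(np.sum(np.abs(psi) ** 2))

m_steps = 4000
dt = (2 * np.pi / omega) / m_steps       # one full oscillation period in total

# Phase factors of one Trotter step: e^{-i V dt / hbar} and e^{-i T dt / hbar}
expV = np.exp(-1j * 0.5 * kspring * x ** 2 * dt / hbar)
expT = np.exp(-1j * (p ** 2 / (2 * mass)) * dt / hbar)

phi = psi.copy()
for _ in range(m_steps):
    phi = expV * phi                               # phase in the position basis
    phi = np.fft.ifft(expT * np.fft.fft(phi))      # phase in the momentum basis

# After one period the coherent state returns to itself up to a global phase
overlap = abs(np.vdot(psi, phi)) ** 2
assert overlap > 0.99
```

Each multiplication by `expV` or `expT` corresponds to one application of the partial phase operator on a quantum register, and each FFT/inverse-FFT pair to a QFT and its inverse.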

Reusing the secondary registers

Another interesting open question is the possibility of reusing the secondary quantum registers in the iterations of the phase algorithm. In its current form, the phase algorithm needs mm secondary quantum registers which are used in each iteration to apply the phase step Δ and discarded afterward. The state of each secondary register changes after the application of the partial phase operator and is slightly entangled with the primary register.

This entanglement does not allow us to reset a register by directly measuring and re-initializing it in the original state, because the measurement would force the entangled wavefunction of the primary and secondary registers to collapse, causing part of the information to be lost. These mm registers therefore have to be kept unmeasured until the end of the phase algorithm, which is unfavorable for very large mm.

We suggest investigating the possibility of performing local operations on the secondary registers after each partial phase operation to prepare them for non-destructive measurements that do not significantly disturb the primary register. If possible, this will enable reusing the secondary registers for the next iterations, eliminating the need for a growing amount of memory proportional to the number of iterations mm.

Acknowledgments

I would like to thank my supervisors and colleagues who have supported me working on this project.

I would also like to acknowledge the support of the Max Planck School of Photonics graduate school that has offered me a scholarship during my Master’s studies, enabling me to fully dedicate my focus to this research work.

The figures containing quantum circuits in the thesis were created using a LaTeX package named yquant by Benjamin Desef:

Desef, Benjamin. “Yquant: Typesetting Quantum Circuits in a Human-Readable Language.” arXiv, September 4, 2021. Desef (2020).

References
  1. Shor, P. W. (1997). Polynomial-Time Algorithms for Prime Factorization and Discrete Logarithms on a Quantum Computer. SIAM Journal on Computing, 26(5), 1484–1509. 10.1137/S0097539795293172
  2. Grover, L. K. (1996). A Fast Quantum Mechanical Algorithm for Database Search. Proceedings of the Twenty-Eighth Annual ACM Symposium on Theory of Computing, 212–219. 10.1145/237814.237866
  3. Montanaro, A. (2016). Quantum algorithms: an overview. Npj Quantum Information, 2(1), 15023. 10.1038/npjqi.2015.23
  4. Nielsen, M. A., & Chuang, I. L. (2010). Quantum Computation and Quantum Information: 10th Anniversary Edition. Cambridge University Press. 10.1017/CBO9780511976667
  5. Saleh, B. E. A., & Teich, M. C. (1991). Electromagnetic Optics. In Fundamentals of Photonics (pp. 157–192). John Wiley & Sons, Ltd. 10.1002/0471213748.ch5