What you'll be doing is a disguised version of the Beam Propagation Method for optical propagation through a waveguide of varying cross-section (analogous to time-varying potentials), so it would be worth checking that out as well.

The way I look at the SSFM/BPM is as follows. Its foundation is the Trotter product formula from Lie theory:

$$\lim_{m\to\infty}\left(\exp\left(\frac{D\,t}{m}\right)\,\exp\left(\frac{V\,t}{m}\right)\right)^m=\exp\left((D+V)\,t\right)\tag{1}$$
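If you'd like to see (1) in action before believing it at the scale of $U(N)$, a few lines of NumPy/SciPy suffice; the small random skew-Hermitian matrices below are my stand-ins for $D$ and $V$, purely for illustration:

```python
# A minimal numerical check of the Trotter product formula (1).
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(0)

def random_skew_hermitian(n):
    """Return a random n x n skew-Hermitian matrix (an element of u(n))."""
    a = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
    return (a - a.conj().T) / 2

n, t = 4, 1.0
D, V = random_skew_hermitian(n), random_skew_hermitian(n)
exact = expm((D + V) * t)

for m in (1, 10, 100, 1000):
    trotter = np.linalg.matrix_power(expm(D * t / m) @ expm(V * t / m), m)
    print(m, np.linalg.norm(trotter - exact))  # error shrinks roughly like 1/m
```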
You discretise your $x-y$ (or $x-y-z$) domain, so that the wavefunction $\psi(x,y,z)$ at time $t$ becomes an $N$-component column vector $\Psi$ (for a 1024×1024 grid we have $N=1024^2=1048576$), and then your Schrödinger equation is of the form:
$$\frac{\mathrm{d}\,\Psi}{\mathrm{d}\,t}=K\,\Psi=(D+V(t))\,\Psi\tag{2}$$
where $K=D+V$ is an $N\times N$ skew-Hermitian matrix, an element of $\mathfrak{u}(N)$, and $\Psi$ is going to be mapped with increasing time by an element of the one-parameter group $\exp(K\,t)$. (I've sucked the $i\,\hbar$ factor into the $K=D+V$ on the RHS so I can more readily talk in Lie-theoretic terms.) Given the size of $N$, the operators' natural habitat $U(N)$ is a thoroughly colossal Lie group, so PHEW! yes, I am still talking in wholly theoretical terms! Now, what does $D+V$ look like? Still imagining for now, it could be thought of as a finite-difference version of $i\,\hbar\,\nabla^2/(2\,m)-i\,\hbar^{-1}\,V_0+i\,\hbar^{-1}\,(V_0-V(x,y,z,t_0))$, where $V_0$ is some convenient "mean" potential for the problem at hand.
We let:
$$\begin{aligned}D&=\frac{i\,\hbar}{2\,m}\,\nabla^2-i\,\hbar^{-1}\,V_0\\V&=i\,\hbar^{-1}\,\left(V_0-V(x,y,z,t)\right)\end{aligned}\tag{3}$$
Why I have split them up like this will become clear below.
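To see what (2) and (3) look like concretely, here is a little sketch that builds a finite-difference $K=D+V$ on a tiny 1-D grid and checks the Lie-theoretic claims above; the grid size, the harmonic example potential, and the units $\hbar=m=1$ are all assumptions of mine for illustration:

```python
# Build a finite-difference K = D + V and verify it lives in u(N).
import numpy as np
from scipy.linalg import expm

N, dx = 64, 0.1
x = np.arange(N) * dx
V0 = 1.0
Vx = 0.5 * (x - x.mean())**2            # arbitrary example potential V(x)

# Periodic finite-difference Laplacian.
lap = -2.0 * np.eye(N) + np.eye(N, k=1) + np.eye(N, k=-1)
lap[0, -1] = lap[-1, 0] = 1.0
lap /= dx**2

D = 0.5j * lap - 1j * V0 * np.eye(N)    # i hbar/(2m) Laplacian - i V0/hbar
V = 1j * np.diag(V0 - Vx)               # i (V0 - V(x))/hbar

K = D + V
print(np.allclose(K, -K.conj().T))             # K is skew-Hermitian: True
U = expm(0.01 * K)
print(np.allclose(U.conj().T @ U, np.eye(N)))  # exp(K dt) is unitary: True
```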
The point about $D$ is that it can be worked out analytically for a plane wave: it is a simple multiplication operator in momentum co-ordinates. So, to work out $\Psi\mapsto\exp(\Delta t\,D)\,\Psi$, here are the first three steps of an SSFM/BPM cycle:
- Impart an FFT to the dataset $\Psi$ to transform it into a set $\tilde{\Psi}$ of superposition weights of plane waves: now the grid co-ordinates have been changed from $x,y,z$ to $k_x,k_y,k_z$;
- Impart $\tilde{\Psi}\mapsto\exp(\Delta t\,D)\,\tilde{\Psi}$ by simply multiplying each point on the grid by $\exp\left(-i\,\Delta t\,\left(\frac{\hbar\,(k_x^2+k_y^2+k_z^2)}{2\,m}+\frac{V_0}{\hbar}\right)\right)$;
- Impart an inverse FFT to map our grid back to $\exp(\Delta t\,D)\,\Psi$.

Now we're back in the position domain. This is the better domain in which to impart the operator $V$, of course: here $V$ is a simple multiplication operator. So here is the last step of your algorithmic cycle:
- Impart the operator $\Psi\mapsto\exp(\Delta t\,V)\,\Psi$ by simply multiplying each point on the grid by the phase factor $\exp\left(i\,\Delta t\,\left(V_0-V(x,y,z,t)\right)/\hbar\right)$.
... and then you begin your next $\Delta t$ step, and so cycle over and over. Clearly it is very easy to put time-varying potentials $V(x,y,z,t)$ into the code.
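To make the cycle concrete, here is a minimal 1-D sketch in Python/NumPy; the units $\hbar=m=1$, the Gaussian initial state, and the example time-varying harmonic potential are my placeholder choices, not part of the recipe itself:

```python
# One SSFM/BPM cycle per loop pass: FFT, phase in k-space, inverse FFT,
# phase in x-space.
import numpy as np

N, L = 1024, 40.0
dx = L / N
x = (np.arange(N) - N // 2) * dx
k = 2 * np.pi * np.fft.fftfreq(N, d=dx)        # plane-wave co-ordinates k_x

dt = 1e-3
V0 = 0.0                                       # convenient "mean" potential

def V_of_x_t(x, t):
    return 0.5 * x**2 * (1 + 0.1 * np.sin(t))  # example time-varying potential

psi = np.exp(-x**2)                            # example initial wavefunction
psi = psi / np.sqrt(np.sum(np.abs(psi)**2) * dx)

t = 0.0
for step in range(10000):
    psi_k = np.fft.fft(psi)                    # 1. position -> momentum
    psi_k *= np.exp(-1j * dt * (0.5 * k**2 + V0))   # 2. exp(dt*D): pure phase
    psi = np.fft.ifft(psi_k)                   # 3. momentum -> position
    psi *= np.exp(1j * dt * (V0 - V_of_x_t(x, t)))  # 4. exp(dt*V): pure phase
    t += dt
```

Each pass through the loop is one $\exp(\Delta t\,D)$ followed by one $\exp(\Delta t\,V)$ application; the norm of `psi` is conserved to machine precision, which is the unitarity point made below.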
So you see, you simply choose $\Delta t$ small enough that the Trotter formula (1) kicks in: you're simply approximating the action of the operator $\exp((D+V)\,\Delta t)\approx\exp(D\,\Delta t)\,\exp(V\,\Delta t)$, and you flit back and forth with your FFT between position and momentum co-ordinates, i.e. the domains where $V$ and $D$, respectively, are simple multiplication operators.
Notice that you are only ever imparting, even in the discretised world, unitary operators: FFTs and pure phase factors.
One point you do need to be careful of is that as your $\Delta t$ becomes small, you must make sure that the spatial grid spacing shrinks as well. Otherwise, suppose the spatial grid spacing is $\Delta x$. Then the physical meaning of one discrete step is that the diffraction effects are travelling at a velocity $\Delta x/\Delta t$; when simulating Maxwell's equations and waveguides, you need to make sure that this velocity is much smaller than $c$. I daresay similar limits apply to the Schrödinger equation: I don't have direct experience here, but it does sound fun, and maybe you could post your results sometime!
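The check itself is trivial but easy to forget; a throwaway snippet along these lines (all numbers are illustrative placeholders) can be run whenever you change $\Delta t$:

```python
# Sanity check of the caveat above: the velocity implied by one grid
# cell per time step.
c = 3.0e8        # speed of light, m/s
dx = 1.0e-7      # spatial grid spacing, m
dt = 1.0e-15     # time step, s

v_grid = dx / dt
print(f"grid velocity dx/dt = {v_grid:.2e} m/s = {v_grid / c:.2e} c")
# The caveat asks for dx/dt << c: if shrinking dt has pushed v_grid
# up toward c, shrink dx along with it.
```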
A second "experience" point with this kind of thing - I'd be almost willing to bet this is how you'll wind up following your ideas. We often have ideas that we want to test with simple, quick-and-dirty simulations, but it never quite works out that way! I'd begin with the SSFM as I've described above, as it is very easy to get running, and you'll quickly see whether or not its results are physical. Later on you can use your, say, Mathematica SSFM code to check the results of more sophisticated code you might end up building, say a Crank-Nicolson code along the lines of Kyle Kanos's answer.
Error Bounds
The Dynkin formula realisation of the Baker-Campbell-Hausdorff Theorem:
$$\exp(D\,\Delta t)\,\exp(V\,\Delta t)=\exp\left((D+V)\,\Delta t+\frac{1}{2}\,[D,V]\,\Delta t^2+\cdots\right)$$
converging for some $\Delta t>0$, shows that the method is accurate to second order in $\Delta t$. One can also show that:
$$\exp(D\,\Delta t)\,\exp(V\,\Delta t)\,\exp\left(-\frac{1}{2}\,[D,V]\,\Delta t^2\right)=\exp\left((D+V)\,\Delta t+O(\Delta t^3)\right)$$
You can, in theory, therefore use the term $\exp\left(-\frac{1}{2}\,[D,V]\,\Delta t^2\right)$ to estimate the error and set your $\Delta t$ accordingly. This is not as easy as it looks, and in practice the bounds end up being rough estimates of the error instead. The problem is that (in one dimension, say):
$$\frac{\Delta t^2}{2}\,[D,V]=\frac{\Delta t^2}{4\,m}\,\left(\partial_x^2 V(x,t)+2\,\partial_x V(x,t)\,\partial_x\right)$$
and there is no readily found transformation to co-ordinates wherein $[D,V]$ is a simple multiplication operator. So you have to be content with $\exp\left(-\frac{1}{2}\,[D,V]\,\Delta t^2\right)\approx e^{-i\,\varphi\,\Delta t^2}\,\left(\mathrm{id}-\left(\frac{1}{2}\,[D,V]-i\,\varphi(t)\right)\,\Delta t^2\right)$ and use this to estimate your error, by working out $\left(\mathrm{id}-\left(\frac{1}{2}\,[D,V]-i\,\varphi(t)\right)\,\Delta t^2\right)\,\psi$ for your currently evolving solution $\psi(x,t)$ and using this to set your $\Delta t$ on the fly after each cycle of the algorithm. You can of course make these ideas the basis of an adaptive stepsize controller for your simulation. Here $\varphi$ is a global phase pulled out of the dataset to minimise the norm of $\left(\frac{1}{2}\,[D,V]-i\,\varphi(t)\right)\,\Delta t^2$; you can of course often throw such a global phase out: depending on what you're doing with the simulation results, we're often not bothered by a constant global phase $\exp\left(i\,\int\varphi\,\mathrm{d}t\right)$.
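Here is a sketch of how such an on-the-fly estimate and adaptive stepsize controller might look, under the same illustrative assumptions as before ($\hbar=m=1$, one dimension, spectral derivatives); the function name `adapt_dt`, the tolerance `tol`, and the rescaling rule are all hypothetical choices of mine:

```python
# Adaptive time step from the commutator error estimate above. Here
# [D, V] psi = (V'' psi + 2 V' psi') / (2m), with psi' taken spectrally.
import numpy as np

def adapt_dt(psi, x, k, Vx, dt, tol=1e-8):
    """Estimate the local splitting error and suggest a new time step."""
    dx = x[1] - x[0]
    dV = np.gradient(Vx, dx)                       # V'(x)
    d2V = np.gradient(dV, dx)                      # V''(x)
    dpsi = np.fft.ifft(1j * k * np.fft.fft(psi))   # psi'(x), spectral derivative
    com_psi = 0.5 * (d2V * psi + 2.0 * dV * dpsi)  # [D, V] psi  (m = 1)

    # Pull out the best global-phase rate (the phi of the text): the scalar
    # multiple of psi closest to (1/2)[D, V] psi in the L2 sense.
    c = np.vdot(psi, 0.5 * com_psi) / np.vdot(psi, psi)
    resid = 0.5 * com_psi - c * psi
    err = np.linalg.norm(resid) * np.sqrt(dx) * dt**2  # estimated step error

    # Local error scales like dt^2, so rescale toward the tolerance,
    # limiting how fast dt may shrink or grow between steps.
    return dt * min(2.0, max(0.1, np.sqrt(tol / max(err, 1e-300))))
```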
A relevant paper about errors in the SSFM/BPM is:
Lars Thylén, "The Beam Propagation Method: An Analysis of its Applicability", Optical and Quantum Electronics 15 (1983), pp. 433-439.
Lars Thylén thinks about the errors in non-Lie-theoretic terms (Lie groups are my bent, so I like to look for interpretations in terms of them), but his ideas are essentially the same as the above.