However, the big trick in it is to find the optimal alpha used in the integration. A suboptimal alpha will often lead to high inaccuracy, because of the strong oscillations that then appear in the integrand. So the method is robust only if the root finding (for the optimal alpha) is robust.
The original paper looks at the Riccati equation for B, where B is the following term in the characteristic function:
$$\phi(u) = e^{iuf + A(u,t) + B(u,t)\sigma_0}$$
The solution defines the $\alpha_{\max}$ where the characteristic function explodes. The Riccati equation is complex, but not complicated:
$$\frac{dB}{dt} = \hat{\alpha}(u) - \beta(u) B + \gamma B^2$$
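For concreteness, here is a minimal sketch of the closed-form B solving this Riccati equation with Apache Commons Math's Complex type. It assumes the usual Heston coefficients $\hat{\alpha}(u) = -u(u+i)/2$, $\beta(u) = \kappa - i\rho\sigma u$, $\gamma = \sigma^2/2$ and the non-exploding branch of the square root; the class and parameter names are mine, not from the paper.

```java
import org.apache.commons.math3.complex.Complex;

// Sketch of the closed-form B(u,t) solving dB/dt = alphaHat(u) - beta(u)*B + gamma*B^2, B(u,0)=0,
// under the usual Heston parameterization (an assumption of this sketch, not taken from the post).
public final class HestonB {

    public static Complex value(Complex u, double t, double kappa, double sigma, double rho) {
        Complex beta = u.multiply(Complex.I).multiply(rho * sigma).negate().add(kappa); // kappa - i*rho*sigma*u
        Complex alphaHat = u.multiply(u.add(Complex.I)).multiply(-0.5);                 // -u(u+i)/2
        double gamma = 0.5 * sigma * sigma;
        // D = sqrt(beta^2 - 4*alphaHat*gamma)
        Complex D = beta.multiply(beta).subtract(alphaHat.multiply(4 * gamma)).sqrt();
        // G = (beta - D)/(beta + D), the non-exploding branch
        Complex G = beta.subtract(D).divide(beta.add(D));
        Complex eDt = D.multiply(-t).exp();                                             // e^{-D t}
        // B = (beta - D)/(2*gamma) * (1 - e^{-Dt}) / (1 - G*e^{-Dt})
        return beta.subtract(D).divide(2 * gamma)
                .multiply(Complex.ONE.subtract(eDt))
                .divide(Complex.ONE.subtract(G.multiply(eDt)));
    }

    private HestonB() {
    }
}
```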
I initially did not understand its role: to compute $\alpha_{\max}$ so that, later, one can compute the optimal alpha with a good bracketing. The bracketing is particularly important in order to use a decent solver, like the Brent solver; otherwise, one is mostly left with Newton's method. It turns out that I explored a reduced function, much simpler than the Riccati equation, which seems to work in all the cases I have found/tried: solve $1/B = 0$.
If B explodes, $\phi$ will explode. The trick, like when solving the Riccati equation, is to have either a good starting point (for Newton) or, better, a bracketing. It turns out that Lord and Kahl give a bracketing for $1/B$, even if they don't present it like this: their $\tau_D^+$ on page 10 for the lower bracket, and $\tau^+$ for the upper bracket. $\tau^+$ will make $1/B$ explode, exactly. One could also find the next periods by adding $4\pi/t$ instead of $2\pi/t$ as they do to move from $\tau_D^+$ to $\tau^+$, but this is of little interest since we don't want to go past the first explosion.
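A minimal sketch of the bracketed root finding, reusing the HestonB sketch above: the characteristic function is evaluated at the purely imaginary point $u = -(\alpha+1)i$ so that B, and hence $1/B$, is a real function of alpha, and the Brent solver from Commons Math does the rest. The bracket endpoints here are placeholders standing in for the $\tau_D^+$/$\tau^+$ bounds of Lord and Kahl, which are not reproduced in this sketch.

```java
import org.apache.commons.math3.analysis.UnivariateFunction;
import org.apache.commons.math3.analysis.solvers.BrentSolver;
import org.apache.commons.math3.complex.Complex;

// Sketch only: find alphaMax as the root of alpha -> 1/B(-(alpha+1)i, t).
// lowerBound/upperBound are placeholders for the tauD+/tau+ bracket of Lord-Kahl (p. 10).
public final class AlphaMaxSolver {

    public static double alphaMax(final double t, final double kappa, final double sigma,
                                  final double rho, double lowerBound, double upperBound) {
        UnivariateFunction oneOverB = new UnivariateFunction() {
            public double value(double alpha) {
                // At u = -(alpha+1)i the Heston B term is real (up to round-off).
                Complex u = new Complex(0.0, -(alpha + 1.0));
                return 1.0 / HestonB.value(u, t, kappa, sigma, rho).getReal();
            }
        };
        BrentSolver solver = new BrentSolver(1e-12);
        return solver.solve(1000, oneOverB, lowerBound, upperBound);
    }

    private AlphaMaxSolver() {
    }
}
```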
It's quite interesting to see that my simple approach is actually closely related to the more involved Riccati approach: the starting point could be the same, although it is much more robust to just use the Brent solver on the bracketed $\alpha_{\max}$. I actually believe that the Riccati equation explodes at the same points, except maybe for some rare combinations of Heston parameters.
From a coding perspective, I found that Apache Commons Math was a decent library to do complex arithmetic or to solve/minimize functions. The complex part was better than some in-house implementation: for example, the square root was more precise in Commons Math, and the solvers are robust. It even made me think that it is often a mistake to reinvent the wheel. It's good to choose the best implementations/algorithms possible, but reinventing a Brent solver??? a linear interpolator??? Also, the Commons Math library imposes a good structure. In-house stuff tends to be messy (no real interfaces, or many different ones). I believe the right approach is to use and embrace/extend Apache Commons Math. If some algorithms are badly coded/not performing well, then write your own using the same kind of interfaces as Commons Math (or some other good maths library).
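As a small illustration of the Commons Math pieces mentioned above (nothing specific to Heston), the complex operations used throughout this post are one-liners:

```java
import org.apache.commons.math3.complex.Complex;

// Minimal illustration of the Commons Math complex arithmetic used in the sketches above.
public class ComplexDemo {
    public static void main(String[] args) {
        Complex z = new Complex(-3.0, 4.0);
        System.out.println(z.sqrt());                        // principal square root: 1 + 2i
        System.out.println(z.exp());                         // complex exponential
        System.out.println(z.multiply(Complex.I).add(1.0));  // 1 + i*z
    }
}
```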
The next part of this series on the Lord-Kahl method is here.