Tuesday, May 28, 2013
SABR with the new Hagan PDE Approach
At a Thalesians presentation, Hagan presented a new PDE-based approach to compute arbitrage-free prices under SABR. It is similar in spirit to Andreasen-Huge, but the PDE is on the density directly, not on the prices, and there is no one-step procedure: it is solved like a regular PDE with appropriate boundary conditions.
I was wondering how it compares to the Andreasen-Huge results.
My first implementation was quite slow. I suspected the Math.pow calls were the culprit, and it turned out they could be reduced a great deal; as a result, it is now quite fast. It is still slower than Andreasen-Huge, though: one typically uses around 40 time steps, while Andreasen-Huge uses a single step, so the ratio could be as high as 40 to 1. In practice it is likely less than 10x slower, but still.
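To give an idea of the kind of optimization involved, here is a minimal sketch (hypothetical names, not the actual code): since the SABR backbone C(F) = F^beta depends only on the grid, the powers can be precomputed once instead of being evaluated at every time step.

    // Precompute the grid-dependent powers once, outside the time loop.
    // F[] is the forward grid, beta the SABR exponent (hypothetical names).
    double[] C = new double[F.length];
    for (int i = 0; i < F.length; i++) {
        C[i] = Math.pow(F[i], beta);
    }
    // The time stepping then reuses C[i] instead of calling Math.pow repeatedly.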
While looking at the implied volatilities, I found something intriguing with Andreasen-Huge: the implied volatilities from the refined solution, using the corrected forward volatility, are further away from the Hagan implied volatilities than without the adjustment, and the effect is quite pronounced at the money.
Interestingly, the authors do not plot that graph in their paper. They plot a similar graph for their own closed-form analytic formula, which is what is actually used to compute the forward volatility. I suppose that because they calibrate and price through their method, they do not care so much that the ATM prices do not match Hagan's original formula.
We can see something else on that graph: the Hagan PDE boundary is not as nice as the Andreasen-Huge boundary for high strikes (they use a Hagan-like approximation at the boundaries, which is why it crosses the Hagan implied volatilities there).
If we use a simple option gamma = 0 boundary in Andreasen-Huge, we obtain a shape very similar to the Hagan PDE, because the option price is effectively 0 at the boundary.
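Concretely, a zero-gamma boundary just says the option price is linear in the underlying at the grid edges, which discretizes to a two-point extrapolation (a sketch, with V the option price array of size m, hypothetical names):

    // gamma = 0 at the boundaries: the second derivative vanishes,
    // so the boundary values are linear extrapolations of the interior.
    V[0] = 2 * V[1] - V[2];
    V[m - 1] = 2 * V[m - 2] - V[m - 3];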
Hagan chose a specifically tailored Crank-Nicolson scheme. I was wondering how it fared when the number of time steps is reduced.
The answer is: not well. This is the typical Crank-Nicolson issue. It could be interesting to adapt the method to use Lawson-Morris-Goubet or TR-BDF2, or a simple Euler scheme with Richardson extrapolation. This would allow the use of fewer time steps; in practice, the accuracy is not so bad with only 10 time steps.
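For example, Richardson extrapolation in time takes a first-order scheme (such as implicit Euler), solves with n and 2n steps, and combines the two solutions to cancel the leading error term (a sketch; solveDensity is a hypothetical helper returning the price for a given number of time steps):

    // Richardson extrapolation in time for a first-order scheme:
    // the O(dt) error cancels in the combination, leaving O(dt^2).
    double coarse = solveDensity(n);      // n time steps
    double fine = solveDensity(2 * n);    // 2n time steps
    double price = 2 * fine - coarse;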
What I like about the Hagan PDE approach is that the implied volatilities and the probability density converge well to the standard Hagan formula when there is no negative density problem, for example for shorter maturities. This is better than Andreasen-Huge, where there always seems to be a 1 vol point difference. However, the PDE method is quite slow compared to the original simple analytic formula.
Update March 2014 - I now have a paper on this: "Finite Difference Techniques for Arbitrage Free SABR".
Friday, May 24, 2013
SABR with Andreasen-Huge
I am on holiday today. Unfortunately I am still thinking about work-related matters and, out of curiosity, I wanted to do a little experiment. I know it is not very good to spend free time on work-related stuff: there is no reward for it, and there is so much more to life. Hopefully it will be over after this post.
Around two years ago, I saw a presentation from Andreasen and Huge about how they were able to price and calibrate SABR with a one-step finite difference technique. At the time, I did not understand their idea much: my mind was too focused on more classical finite difference techniques and not enough on the big picture. Their idea is quite general and can be applied to much more than SABR.
Recently there has been some talk and development where I work around SABR (a popular way to interpolate the option implied volatility surface for interest rate derivatives), especially regarding the implied volatility wings at low strikes, and sometimes on how to price in a negative rates environment. There are actually quite a few research papers around this. I am not really working on that part, so I mostly just listened. Then a former coworker suggested that the Andreasen-Huge method was what banks actually seemed to choose in practice. A few weeks later, the Thalesians (a group for people interested in quantitative finance) announced a presentation by Hagan (one of the inventors of SABR) about a technique that sounded very much like Andreasen-Huge to deal with the original SABR issues at low rates.
As the people working on this had not investigated the Andreasen-Huge technique, I somehow felt that I had to, and that maybe, this time, I would be able to grasp their idea.
It took me just a few hours to get meaningful results. Here is the price of out-of-the-money vanilla options using alpha = 0.0758194, nu = 0.1, beta = 0.5, rho = -0.1, forward = 0.02, and a maturity of 2 years.
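The benchmark throughout these comparisons is Hagan's original 2002 lognormal implied-volatility expansion. For reference, here is a minimal sketch of it (my own transcription, with a guard for the at-the-money limit; not the exact code used for the figures):

    // Hagan et al. (2002) lognormal SABR vol approximation; f = forward, K = strike.
    static double haganVol(double alpha, double beta, double rho, double nu,
                           double f, double K, double T) {
        double omb = 1 - beta;                    // one minus beta
        double logFK = Math.log(f / K);
        double fkPow = Math.pow(f * K, omb / 2);  // (f*K)^((1-beta)/2)
        double z = nu / alpha * fkPow * logFK;
        double x = Math.log((Math.sqrt(1 - 2 * rho * z + z * z) + z - rho) / (1 - rho));
        double zOverX = Math.abs(z) < 1e-12 ? 1.0 : z / x; // ATM limit: z/x -> 1
        double denom = fkPow * (1 + omb * omb / 24 * logFK * logFK
                + omb * omb * omb * omb / 1920 * logFK * logFK * logFK * logFK);
        double correction = 1 + (omb * omb * alpha * alpha / (24 * fkPow * fkPow)
                + rho * beta * nu * alpha / (4 * fkPow)
                + (2 - 3 * rho * rho) * nu * nu / 24) * T;
        return alpha / denom * zOverX * correction;
    }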
I did not have, in my home library, a way to find the implied volatility for a given price. I knew of two existing methods: Jaeckel's "By Implication" and Li's rational functions approach. I discovered that Li wrote a new paper on the subject in which he uses an SOR method to find the implied volatility, and claims it is very accurate, very fast and very robust. Furthermore, the same idea can be applied to the normal implied volatility. What attracted me to it is the simplicity of the underlying algorithm. Jaeckel's way is a nice way to do Newton-Raphson, but there seem to be so many things to "prepare" to make it work in most cases that I felt it would be too much work for my experiment. It took me a few more hours to code Li's SOR solvers, but they worked amazingly well for my experiment.
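For context, this is the kind of plain Newton-Raphson baseline those methods improve upon (this is not Li's SOR method and not Jaeckel's method; a naive sketch using the undiscounted Black formula on the forward, with NormalDistribution from Apache Commons Math):

    // import org.apache.commons.math3.distribution.NormalDistribution;
    static final NormalDistribution N = new NormalDistribution();

    static double impliedVol(double price, double f, double K, double T, boolean isCall) {
        double v = 0.2; // naive initial guess, one of the weak points Li's approach avoids
        for (int i = 0; i < 100; i++) {
            double sqrtT = Math.sqrt(T);
            double d1 = (Math.log(f / K) + 0.5 * v * v * T) / (v * sqrtT);
            double d2 = d1 - v * sqrtT;
            double black = isCall
                    ? f * N.cumulativeProbability(d1) - K * N.cumulativeProbability(d2)
                    : K * N.cumulativeProbability(-d2) - f * N.cumulativeProbability(-d1);
            double vega = f * sqrtT * Math.exp(-0.5 * d1 * d1) / Math.sqrt(2 * Math.PI);
            double diff = black - price;
            if (Math.abs(diff) < 1e-10 * f) return v;
            v -= diff / vega; // can diverge in the wings, hence Jaeckel's careful "preparation"
        }
        return v;
    }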
At first I had an error in my boundary condition and obtained not-so-good results, especially for long maturities. The traps with the Andreasen-Huge technique are very much the same as with classical finite differences: be careful to place the strike on the grid (and possibly smooth the payoff around it), and use good boundaries. A simple way to handle the first point is sketched below.
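One simple way to place the strike on the grid is to shift a uniform grid so that the strike coincides with the nearest node (a sketch, hypothetical names):

    // Shift a uniform forward grid so that the strike K falls exactly on a node.
    // The whole grid moves by less than half a step, which is harmless here.
    double dF = (fMax - fMin) / (n - 1);
    int iK = (int) Math.round((K - fMin) / dF);
    double shift = K - (fMin + iK * dF);
    double[] F = new double[n];
    for (int i = 0; i < n; i++) {
        F[i] = fMin + i * dF + shift;
    }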
Friday, May 17, 2013
Large Steps in Schobel-Zhu/Heston the Lazy Way
Van Haastrecht, Lord and Pelsser present an effective way to price derivatives by Monte-Carlo under the Schobel-Zhu model (as well as under the Schobel-Zhu-Hull-White model). It is quite similar in spirit to the Andersen QE scheme for Heston.
In their paper, they evolve the (log) asset process together with the volatility process, using the same discretization times. A while ago, when looking at Joshi and Chan's large steps for Heston, I noticed that, inspired by the Broadie-Kaya exact Heston scheme, they present the idea of evolving the variance process with small steps and the asset process with large steps (depending on the payoff), using the integrated variance computed over the small steps. The asset steps correspond to the payoff evaluation dates. At the time, I applied this idea to the Andersen QE scheme and it worked reasonably well.
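Transposed to the Schobel-Zhu OU volatility, the structure looks roughly like this (a sketch under simplifying assumptions: plain Euler for the inner volatility loop, trapezoidal integrals, hypothetical names, rng a java.util.Random). The integral of v dW_v over the large step follows exactly from Ito's lemma applied to v^2 for the OU process:

    // One large asset step of length dt, with nSub small volatility steps inside.
    // Schobel-Zhu vol: dv = kappa*(theta - v) dt + xi dW_v (an OU process).
    double v0 = v, intV = 0, intV2 = 0;
    double h = dt / nSub;
    for (int j = 0; j < nSub; j++) {
        double vNew = v + kappa * (theta - v) * h + xi * Math.sqrt(h) * rng.nextGaussian();
        intV += 0.5 * (v + vNew) * h;             // integral of v dt (trapezoidal)
        intV2 += 0.5 * (v * v + vNew * vNew) * h; // integral of v^2 dt
        v = vNew;
    }
    // Ito on v^2 gives the integral of v dW_v from the endpoints and the two integrals.
    double intVdW = (v * v - v0 * v0 - xi * xi * dt
            - 2 * kappa * theta * intV + 2 * kappa * intV2) / (2 * xi);
    // Conditional on the vol path, the orthogonal part of log S is Gaussian
    // with variance equal to the integrated variance.
    logS += r * dt - 0.5 * intV2 + rho * intVdW
            + Math.sqrt(1 - rho * rho) * Math.sqrt(intV2) * rng.nextGaussian();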
So I tried to apply the same logic to Schobel-Zhu, and my first tests show that it works too. Interestingly, the speed gain is about 2x. Here are the results for vanilla call options of different strikes.
[Figure: similar error between long and short asset steps; the long steps take around half the time to compute.]
I would have expected the difference in performance to increase as the step size decreases, but that is not the case on my computer.
These are not truly large steps as in Joshi and Chan's integrated double gamma scheme, since the variance is still discretized in relatively small steps in my case, but it seems like a good, relatively simple optimization. A while ago, I also implemented the full Joshi and Chan scheme, but it is only really interesting if one always takes long steps: it is horribly slow when the step size is small, which can occur for many exotic payoffs, while the Andersen QE scheme performs almost as well as log-Euler in terms of computational cost.
Monday, May 13, 2013
Exact Forward in Monte-Carlo
Where I work, there used to be quite a bit of confusion about which rates one should use as input to a local volatility Monte-Carlo simulation.
In particular, there is a paper in the Journal of Computational Finance by Andersen and Ratcliffe, "The Equity Option Volatility Smile: a Finite Difference Approach", which explains that one should use specially tailored rates in the finite difference scheme in order to reproduce exact bond prices and exact forward contract prices.
Code has been updated and rolled back, and people have complained about it. But nobody really made the effort to simply write down clearly what is going on, or even to write a unit test around it. So it was just FUD, until this paper.
In short, for log-Euler one can use the intuitive forward drift rate r1*t1 - r0*t0 (the log of the ratio of discount factors), but for Euler one needs a less intuitive forward drift rate to reproduce a nearly exact forward price.
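A sketch of the two drifts, assuming continuously compounded zero rates r0 to t0 and r1 to t1 over a step dt = t1 - t0, with z a standard normal draw (hypothetical names, not the paper's notation):

    // log-Euler: the intuitive log drift reprices the forward exactly.
    double mu = r1 * t1 - r0 * t0; // = log(df(t0)/df(t1))
    logS += mu - 0.5 * sigma * sigma * dt + sigma * Math.sqrt(dt) * z;

    // Euler: E[S(t1)] = S(t0) * (1 + muEuler*dt), so matching the forward
    // ratio exp(r1*t1 - r0*t0) requires the less intuitive rate below.
    double muEuler = Math.expm1(r1 * t1 - r0 * t0) / dt;
    S *= 1 + muEuler * dt + sigma * Math.sqrt(dt) * z;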
Quasi Monte Carlo in Finance
I have been wondering whether there is any better alternative to the standard Sobol (+ Brownian bridge) quasi-random sequence generator for the Monte-Carlo simulation of financial derivatives.
Here is what I found:
- Scrambled Sobol. The idea is to re-randomize the quasi-random numbers slightly. It can provide better uniformity properties and allows for a real estimate of the standard error. There are many ways to do that: the simple Cranley-Patterson rotation, which consists in adding a pseudo-random number modulo 1; Owen scrambling (permutations of the digits); and simplifications of it to achieve reasonable speed. This is all very well described in Owen's Quasi Monte Carlo document (a sketch of the rotation follows this list).
- Lattice rules. They are another form of quasi-random sequence which, so far, was not very well adapted to finance problems. A presentation from Giles & Kuo makes it look like that is changing.
- Fast PCA. An alternative to the Brownian bridge is standard PCA. The problem with PCA is its O(n^2) performance. A speedup is possible in the case of equidistant time steps, and this paper shows it can be generalized. But the data in it show it is only advantageous for more than 1024 steps - not so interesting in finance.
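As mentioned above, here is a minimal sketch of the Cranley-Patterson rotation, the simplest of the scrambling techniques (u is one quasi-random point; shift is a single vector of pseudo-random uniforms, fixed across the whole sequence for one randomization):

    // Cranley-Patterson rotation: add a fixed pseudo-random shift to each
    // coordinate of the quasi-random point, modulo 1.
    double[] rotated = new double[u.length];
    for (int i = 0; i < u.length; i++) {
        rotated[i] = (u[i] + shift[i]) % 1.0;
    }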
Tuesday, May 07, 2013
Time Estimates in Software Development
Recently, I completed, in nearly 4 hours, a project that I had initially estimated at around 2 months. This morning I fixed the few remaining bugs. I looked at the clock, surprised that it was still so early and that I still had so many hours left in the day.
Now I have more time to polish the details and go beyond the initial goal (I think this scares my manager a bit), but I could stop now (and I believe some people do this often) and all of management would be satisfied.
What's interesting is that everybody bought the 2-month estimate without question (I almost believed it myself). This reminded me of my productivity zero post.