## Monday, June 24, 2013

### Godaddy sold my domain name

Yesterday I discovered that emails sent to me suddenly bounced back. I logged into my GoDaddy account and, to my surprise, saw that I did not own any domain name anymore. I looked through my emails to see if I had received a warning, as is usually the case when a domain is about to expire. There was no recent one; the most recent was from May 2011, the last time I had renewed my domain.

I then tried to buy the same domain name again, only to discover it was already taken! The whois record indicated a day-old registration, through GoDaddy itself.

It's no coincidence that GoDaddy sells, for three times the price, the possibility to try to take over a domain as soon as it expires. I find it particularly dishonest that in this case they fail to warn their own customers that their domain is about to expire. As a result of this policy, someone else takes over the domain through them, for a much higher price. A conflict of interest.

From now on, I will not register a domain through a registrar that offers a service to snatch up expiring domains.

## Wednesday, June 19, 2013

### Scala Build Tool : SBT

It's been a while since I last did a pet project in Scala. Today, after many earlier attempts, I decided to give JetBrains IDEA another go for Scala development, as Eclipse with the Scala plugin tended to crash a bit too often for my taste (sometimes resulting in the loss of a few lines of code). I probably could have just updated Eclipse and the Scala plugin; mine were not very old, but not the latest either.

But it was an opportunity to try IDEA. Somehow I had always failed before to set up Scala support properly in IDEA, while it seemed to just work in Eclipse; I had difficulties making it find my Scala compiler. After some Google searches, I found that SBT, the Scala build tool, could automatically create a Scala project for IDEA (a hint to make it work with a project under Scala 2.10 is to put the plugins.sbt file in ~/.sbt/plugins).

It was reasonably easy to create a simple build.sbt file for my project. I added some dependencies (it handles them like Ivy, from Maven repositories), and was pleased to find you can also just put your jars in the lib directory if you do not want to, or cannot, find a Maven repository for them.
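As a sketch, a minimal setup in the sbt of that era looked roughly like this (the project name, versions, and sbt-idea plugin coordinates below are illustrative, not taken from the actual project):

```scala
// build.sbt — minimal 2013-era sbt build (names and versions are examples)
name := "pet-project"

version := "0.1"

scalaVersion := "2.10.1"

// managed dependencies are resolved Ivy-style from Maven repositories
libraryDependencies += "org.scalatest" %% "scalatest" % "1.9.1" % "test"

// unmanaged jars dropped into lib/ are picked up automatically

// ~/.sbt/plugins/plugins.sbt — enables a `gen-idea` task that generates
// the IDEA project files (plugin version is an example):
// addSbtPlugin("com.github.mpeltonen" % "sbt-idea" % "1.4.0")
```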

The tool is quick to launch and does not get in the way. So far the experience has been much nicer than Gradle, which we now use at work, and which I find painfully slow to start and to check dependencies, and extremely complicated to customize to your needs. It's also nicer than Maven, which I always found painful as soon as one wanted a small specific behaviour.

## Tuesday, June 18, 2013

### The Finite Difference Theta Scheme Optimal Theta

The theta finite difference scheme is a common generalization of Crank-Nicolson. In finance, it is presented in Wilmott's book, in a paper by A. Sepp, and in one by Andersen and Ratcliffe. Most of the time, it's just a convenient way to handle implicit (\(\theta=1\)), explicit (\(\theta=0\)) and Crank-Nicolson (\(\theta=0.5\)) with the same algorithm.

Wilmott makes an interesting remark: one can choose a \(\theta\) that cancels out higher-order terms in the local truncation error, and which therefore should lead to increased accuracy:

$$\theta = \frac{1}{2}- \frac{(\Delta x)^2}{12 b \Delta t} $$

where \(b\) is the diffusion coefficient.

This leads to \(\theta < \frac{1}{2}\), which means the scheme is not unconditionally stable anymore but needs to obey (see Morton & Mayers p 30):

$$b \frac{\Delta t}{(\Delta x)^2} \leq \frac{5}{6}$$

and to ensure that \(\theta \geq 0 \):

$$b \frac{\Delta t}{(\Delta x)^2} \geq \frac{1}{6}$$

Crank-Nicolson has a similar requirement to ensure the absence of oscillations given a non-smooth initial value, but because it is unconditionally stable, the condition is actually much weaker when \(b\) depends on \(x\). Crank-Nicolson will be oscillation free if \(b(x_{j_0}) \frac{\Delta t}{(\Delta x)^2} < 1\), where \(j_0\) is the index of the discontinuity, while the theta scheme needs to be stable, that is \(\max(b) \frac{\Delta t}{(\Delta x)^2} \leq \frac{5}{6}\).

This is a much stricter condition if \(b\) varies a lot, as is the case for the arbitrage-free SABR PDE, where \(\max(b) > 200\, b_{j_0}\).

The advantages of such a scheme are then not clear compared to a simpler explicit scheme (possibly with a predictor-corrector step), which will have a similar constraint on the ratio \( \frac{\Delta t}{(\Delta x)^2} \).
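As a quick sanity check, the window on the mesh ratio \(b \frac{\Delta t}{(\Delta x)^2}\) and the resulting \(\theta\) can be computed directly. A small Scala sketch, with purely illustrative numbers:

```scala
// Sketch of the "optimal" theta above and its admissible mesh-ratio window.
// b is the diffusion coefficient, dx the space step, dt the time step.
def optimalTheta(b: Double, dx: Double, dt: Double): Double =
  0.5 - dx * dx / (12.0 * b * dt)

def meshRatio(b: Double, dx: Double, dt: Double): Double =
  b * dt / (dx * dx)

val (b, dx, dt) = (1.0, 0.1, 0.005)    // illustrative values
val mu = meshRatio(b, dx, dt)          // 0.5, inside [1/6, 5/6]
val theta = optimalTheta(b, dx, dt)    // 1/2 - 1/6 = 1/3

// mu <= 5/6 is the stability bound; mu >= 1/6 is equivalent to theta >= 0
assert(mu >= 1.0 / 6 && mu <= 5.0 / 6)
assert(math.abs(theta - 1.0 / 3) < 1e-12)
```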

## Tuesday, June 11, 2013

### Simple "Can Scala Do This?" Questions

Today, a friend asked me if Scala could pass primitives (such as Double) by reference. It can sometimes be useful, instead of creating a full-blown object; in Java there is Commons Lang's MutableDouble. It could be interesting if there were some optimized way to do that.

*One answer could be: it's not functional-programming oriented, and therefore it is not too surprising that this is not encouraged in Scala.*

Then he wondered if we could use it for C#.

*I know this used to be possible in Scala 1.0; I believe it's not anymore since 2.x. This was a cool feature, especially if they had managed to develop strong libraries around it. I think it was abandoned to focus on other things, for lack of resources, but it's sad.*

Later today, I tried to use the nice syntax to return multiple values from a method:

var (a, b) = mymethod(1)

I noticed you then could not do:

(a, b) = mymethod(2)

So declaring a var seems pointless in this case.

*One way to achieve this is:*

*var tuple = mymethod(1)*
*var a = tuple._1*
*var b = tuple._2*

*This does not look so nice.*
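The behaviour can be checked with a small self-contained sketch (mymethod is a made-up example returning a tuple): a tuple pattern works in a declaration, but not as the left-hand side of an assignment; reassigning the whole tuple does.

```scala
// Destructuring a tuple-returning method: fine at declaration,
// but a tuple pattern cannot be used as an assignment target.
def mymethod(i: Int): (Int, String) = (i * 2, "v" + i)

val (a, b) = mymethod(1)   // a = 2, b = "v1"
// (a, b) = mymethod(2)    // does not compile: not an assignment target

// The workaround: keep the tuple itself and re-read its fields.
var tuple = mymethod(1)
tuple = mymethod(2)        // reassigning the whole tuple is allowed
val a2 = tuple._1          // 4
val b2 = tuple._2          // "v2"
```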

## Monday, June 03, 2013

### Akima for Yield Curve Interpolation ?

In my test of yield curve interpolations, focusing on parallel delta versus sequential delta, Akima is the worst of the lot. I am not sure why this interpolation is still popular when most alternatives seem much better. Hyman presented some of the issues with Akima in his 1983 paper.

In the following graph, a higher value is a higher parallel-vs-sequential difference.

On top of that, the Hagan-West example of a tricky curve looks a bit convoluted with it (although it does not produce any negative forward).

I have used the QuantLib implementation; those results make me wonder if there is not something wrong at the boundaries.

## Sunday, June 02, 2013

### 2 Ways for an Accurate Barrier with Finite Difference

I had explored the issue of pricing a barrier using a finite difference discretization of the Black-Scholes PDE a few years ago. Briefly, for explicit schemes, one just needs to place the barrier on the grid and not worry about much else; but for implicit schemes, either the barrier should be placed on the grid and the grid **truncated** at the barrier, or a **fictitious point** should be introduced to force the correct price at the barrier level (0, typically).

The fictitious point approach is interesting for the case of varying rebates, or when the barrier moves around. I first saw this idea in the book "Paul Wilmott on Quantitative Finance".

Recently, I noticed that Hagan made use of the fictitious point approach in his "Arbitrage free SABR" paper; specifically, he places the barrier in the middle of two grid points. There is very little difference between truncating the grid and the fictitious point for a constant barrier.

In this specific case there is a difference, because two additional ODEs are solved on the same grid, at the boundaries. I was especially curious whether one could place the barrier exactly at 0 with the fictitious point, because then one would potentially need to evaluate coefficients for negative values. It turns out you can, as values at the fictitious point are actually not used: the mirror point inside is used instead, because of the mirror boundary conditions.

So the only difference is the evaluation of the first derivative at the barrier (used only for the ODE): the fictitious point uses the value at barrier+h/2, where h is the space between two points at the same timestep, while the truncated barrier uses a value at barrier+h (which can be seen as a standard forward/backward first-order finite difference discretization at the boundary). For this specific case, the fictitious point will be a little bit more precise for the ODE.
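To illustrate that last point, here is a toy Scala sketch, with a made-up smooth solution vanishing at the barrier, comparing the two first-derivative estimates; the mirror condition supplies the ghost value for the fictitious point:

```scala
// Toy comparison of the first-derivative estimates at a barrier B with V(B) = 0.
// Truncated grid: node on the barrier, one-sided difference using V(B + h).
// Fictitious point: barrier midway between a ghost node and B + h/2; the
// mirror condition v(ghost) = -v(B + h/2) enforces v(B) = 0.
val barrier = 1.0                           // barrier level (illustrative)
val h = 0.1                                 // grid spacing (illustrative)
def v(x: Double): Double = math.sin(x - barrier)  // toy solution, v(barrier) = 0

val dTruncated = (v(barrier + h) - v(barrier)) / h

val dFictitious = (v(barrier + h / 2) - (-v(barrier + h / 2))) / h

val exact = 1.0                             // v'(barrier) = cos(0) for the toy solution
// The estimate based on barrier + h/2 has the smaller truncation error.
assert(math.abs(dFictitious - exact) < math.abs(dTruncated - exact))
```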
