Saturday, June 27, 2009

Pulseaudio Nightmares - Pure ALSA to the Rescue

In the latest stable Ubuntu (9.04), PulseAudio still does not work reliably on my hardware (Intel HDA with digital S/PDIF out). I upgraded to 9.10 and had even more problems. Sound always worked on boot, but often broke down after a while, and I could not find any easy way to get it back other than rebooting. Killing and restarting PulseAudio, or looking for the processes holding the snd devices, did not help.

One thing works wonderfully: pure ALSA. To let multiple applications share ALSA, I just use dmix. Since I use the digital output, there is no hardware mixer, but ALSA can provide a software one through softvol, and it works really well. ALSA alone is already not that simple to configure properly; with PulseAudio on top, welcome to your worst configuration nightmares.

Here is the .asoundrc I use:
 
pcm.amix {
    type dmix
    ipc_key 50557
    slave {
        pcm "hw:0,1"
        period_time 0
        period_size 1024
        buffer_size 8192
    }
    bindings {
        0 0
        1 1
    }
}

pcm.softvol {
    type softvol
    slave {
        pcm "amix"   # redirect the output to dmix (instead of "hw:0,0")
    }
    control {
        name "PCM"   # override the PCM slider to set the softvol volume level globally
        card 0
    }
}

pcm.!default {
    type plug
    slave.pcm "softvol"   # make use of softvol
}
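
To check that the chain works, point any ALSA client at the default device, for example aplay test.wav or speaker-test -c 2; the "PCM" slider in alsamixer should then drive the softvol volume. Note that hw:0,1 is where the S/PDIF output sits on my card; yours may differ, so check aplay -l first.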

Monday, June 15, 2009

Java int Overflow Behavior

A coworker recently asked me whether int overflow has a guaranteed behavior. His specific example: can we rely on int x = Integer.MAX_VALUE + 1 producing the same value on every JVM, on any platform?

I thought the answer would be easy to find in the Java Language Specification. But I was wrong: it is not spelled out as clearly as I expected.

I found a trick that shows this behavior is indeed standard and will stay the same; it relies on type casting. Java guarantees that casting a long to an int simply truncates it to int precision, keeping only the low 32 bits. Therefore in

long l = Integer.MAX_VALUE;
l = l + 1;       // still fits in a long, no overflow here
int x = (int) l; // narrowing conversion, guaranteed to keep the low 32 bits

x has a guaranteed value: Integer.MIN_VALUE, i.e. -2147483648.
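
As a quick sanity check, here is a small self-contained program (my own illustration, not from the original discussion) showing that the direct overflow and the long-then-cast path agree:

public class IntOverflowDemo {
    public static void main(String[] args) {
        // Direct int overflow: wraps around in two's complement.
        int direct = Integer.MAX_VALUE + 1;

        // Same computation via a long, then a narrowing cast,
        // which JLS 5.1.3 guarantees keeps only the low 32 bits.
        long l = Integer.MAX_VALUE;
        l = l + 1;
        int viaCast = (int) l;

        System.out.println(direct);                      // -2147483648
        System.out.println(viaCast);                     // -2147483648
        System.out.println(direct == Integer.MIN_VALUE); // true
    }
}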

http://java.sun.com/docs/books/jls/third_edition/html/conversions.html
paragraph 5.1.3

Static Fields and Inheritance

Someone recently asked me to find out the real reason why the code from this thread fails. It is fairly bad code, and not even a very good way to illustrate the problem, but the question is nonetheless interesting.

class Toto extends TotoParent {

    final static Toto a = new Toto("a");

    public Toto(String a) {
        super(a);
    }
}

import java.util.ArrayList;
import java.util.List;

public abstract class TotoParent {

    static List list = new ArrayList();

    public TotoParent(String a) {
        list.add(a);
    }

    protected static List get() {
        return list;
    }
}

import org.junit.Test;
import static org.junit.Assert.*;

public class TotoTest {

    @Test
    public void testGet() {
        assertEquals(1, Toto.get().size());
    }
}
I am quite used to static initialization, and would have answered the same as the first answer in the thread:
"get is static and associated with TotoParent, so that is the same as calling TotoParent.get().size()". I would even have thought that the compiler compiles the call Toto.get() down to TotoParent.get(). But running javap, you can see it is still compiled as a reference to Toto.get(), so a lookup is still done at run time to resolve it to TotoParent.get(). This is why the first answer is actually not quite correct.

The important bit here is that Toto is never initialized, even though we call Toto.get(). The Java specification (an invaluable reference) explains clearly that invoking a static method only initializes the class that actually declares it: calling an inherited static method through a subclass does not initialize the subclass.
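
Here is a minimal sketch of that rule (class names are mine, purely illustrative): running it prints "Parent initialized" and then 0, and the child's static initializer never fires.

public class InitDemo {
    static class Parent {
        static { System.out.println("Parent initialized"); }
        static int size() { return 0; }
    }

    static class Child extends Parent {
        static { System.out.println("Child initialized"); } // never printed
    }

    public static void main(String[] args) {
        // Invokes the static method that Parent declares, through Child:
        // per JLS 12.4.1, only Parent gets initialized.
        System.out.println(Child.size());
    }
}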

Still, calling Toto.get() is not exactly the same as calling TotoParent.get(). Imagine get() were actually declared one level higher, so resolution walked up the chain:
Toto.get() -> TotoParent.get() -> TotoSuperParent.get()
If we compile our code and later change TotoParent to declare its own implementation of get(), the call through Toto will automatically pick it up, without the caller even being recompiled.

http://java.sun.com/docs/books/jls/third_edition/html/execution.html
paragraph 12.4.1

Wednesday, June 03, 2009

Benchmarking Languages Is Difficult

I often look at the famous Computer Language Shootout for fun. Recently I noticed it includes the infamous thread-ring test. Not long ago I posted several blog entries about it, showing how silly this test is.

Looking at the existing Java implementations for the test, I decided to submit a tricky one using a thread pool, pooling the message processing rather than creating one thread per ring node. To my surprise, it was accepted without question, and for a while I had the best score for a Java program. Shortly after, someone else copied my program, stripped out everything not strictly useful for the benchmark (breaking the interesting part of the design), and got accepted as well, with of course a better result.

I then decided to see if I could write an even sillier program, tailored to this test only. I managed to be orders of magnitude faster: one thread, no synchronization, everything processed through a FIFO queue (a LinkedList). This is actually a standard way to turn recursion into iteration; a sketch of the idea follows below. But I was honest enough to say openly that I consider that kind of program to cheat the test, and my entry went into the "interesting alternatives" section.
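
Here is a minimal sketch of that single-threaded approach (my reconstruction for illustration, not the submitted program; names and the default hop count are mine). Each ring node is just an index, and passing the token means enqueueing an entry instead of signalling another thread:

import java.util.LinkedList;
import java.util.Queue;

public class ThreadRingSketch {
    static final int NODES = 503; // ring size mandated by the shootout test

    public static void main(String[] args) {
        int hops = args.length > 0 ? Integer.parseInt(args[0]) : 1000;
        // Each queue entry is {nodeIndex, remainingHops}: one pending "message".
        Queue<int[]> queue = new LinkedList<int[]>();
        queue.add(new int[] { 0, hops }); // token starts at node 1 (index 0)
        while (!queue.isEmpty()) {
            int[] token = queue.poll();
            if (token[1] == 0) {
                // Token exhausted: print the 1-based node holding it.
                System.out.println(token[0] + 1);
            } else {
                // "Pass" the token to the next node by enqueueing it again,
                // instead of handing it to another thread.
                queue.add(new int[] { (token[0] + 1) % NODES, token[1] - 1 });
            }
        }
    }
}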

In reality there is no difference in the "cheating" between my new program and the one accepted in the official list: both cheat by using only one thread and processing everything one by one. Neither program has one thread per node, so both sidestep any real concurrency issues. One merely looks better because it declares a pool of 503 threads (while really using only one or two), whereas the other does not hide that a single thread does all the processing. But this is not obvious to the people accepting the programs.

When I look at the Haskell code, I cannot really tell whether it creates 503 threads in the language, or a pool, or something else; you have to know each language quite well, and sometimes it is not easy to define what cheating even is. That makes this kind of benchmark a bit disappointing. One could force every entry to use the same algorithm, but can you really (a functional language will not naturally use the same algorithm as a procedural one)?
