Going to bundle my replies into one post (hopefully I don't leave anybody out):
Have a question about the measurements, how quiet does the surrounding area need to be? I would think 2-3 tables breaking/shooting in the same area may confuse the device.
To test the quality of the environment, do this: start a recording for a few seconds and stop it (don't break and don't talk - we want a baseline), then look at the waveform and zoom all the way out (on touch-screen devices, just drag your finger down until it stops zooming out). What do you see?
What you're looking at is the audio waveform, centered in the display area. A silent room will be a straight line through the center; the louder a sound is, the more the waveform expands away from the center. What you're looking for is whether this ambient baseline's waveform extends to the top/bottom of the display area (called 'clipping') on a regular basis. A few single lines jumping to the top/bottom are fine - but if you see clipping on a regular basis, then the room is probably too noisy. Sometimes you may see a single spike that is more than just a single pixel - this could be a person talking or some other periodic noise. These types of sounds can usually be overcome by the break recognition, but it depends on what kinds of sounds they are.
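If you're curious what that eyeball test amounts to, here's a toy sketch of the same idea in Python - count how often samples hit the top/bottom of the range. This is just an illustration (the app doesn't expose its audio this way); the thresholds and the fake "quiet"/"noisy" signals are my own made-up numbers, assuming 16-bit PCM audio:

```python
import numpy as np

def clipping_fraction(samples, full_scale=32767, threshold=0.98):
    """Fraction of samples at or near the top/bottom of the display range."""
    return float(np.mean(np.abs(samples.astype(np.int64)) >= threshold * full_scale))

rng = np.random.default_rng(0)

# A quiet room: low-level noise hugging the center line.
quiet = rng.normal(0, 200, 44100).astype(np.int16)

# A noisy room: loud ambient sound that keeps slamming into the rails.
noisy = np.clip(rng.normal(0, 20000, 44100), -32768, 32767).astype(np.int16)

print(clipping_fraction(quiet))  # near zero -> room is fine
print(clipping_fraction(noisy))  # a sizable fraction -> room is too noisy
```

The point is just that "clipping on a regular basis" means a meaningful fraction of the recording is pinned at full scale, not the occasional one-pixel spike.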
Downloaded the app for my BlackBerry and tried it out in the basement last night. I am going to buy it, but I have had mixed results with it so far.
The BlackBerry devices have something called Auto Gain Control (AGC) on the microphone. Basically, this means that as sounds are recorded, they are "decimated" so that everything comes out at roughly the same volume. Somebody talking or rustling paper ends up being about as loud as somebody hitting a break shot. AGC plays havoc on any audio recognition technology (voice recognition, voice printing, etc.), and Break Recognition is no different.
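To picture why AGC is such a problem, here's a toy sketch - not the BlackBerry's actual algorithm, just a crude per-window normalizer I made up to show the effect - of how a quiet rustle and a loud break end up at the same level:

```python
import numpy as np

def toy_agc(samples, window=1024, target=0.3):
    """Crude AGC stand-in: rescale each window so its peak hits one target level.

    Real AGC adjusts gain smoothly over time, but the net effect is similar:
    quiet and loud events come out at roughly equal volume.
    """
    out = np.empty_like(samples, dtype=float)
    for start in range(0, len(samples), window):
        chunk = samples[start:start + window]
        peak = np.max(np.abs(chunk)) or 1.0  # avoid dividing by zero on silence
        out[start:start + window] = chunk * (target / peak)
    return out

# A quiet paper rustle followed by a loud break shot, ~18x apart in level...
rustle = 0.05 * np.sin(np.linspace(0, 50, 1024))
break_shot = 0.9 * np.sin(np.linspace(0, 50, 1024))
signal = np.concatenate([rustle, break_shot])

processed = toy_agc(signal)
# ...but after AGC, both halves peak at the same level (the 0.3 target).
print(np.max(np.abs(processed[:1024])), np.max(np.abs(processed[1024:])))
```

Once the level difference is gone, loudness can no longer help separate a break from background noise, which is exactly the havoc described above.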
In our defense: There has been a great amount of research done in the area of audio analysis for patterns that resemble breaks, and our results exceed those in published papers. However, when you add AGC into the mix, the challenge is that much greater.
So, what can you do about it?
First, try to find a place to set the phone that works best. I realize this sounds like the silly, obvious answer, but let me explain further. If you can control the environmental noise (in your home), then placing the phone on the rail isn't necessary. Instead, try placing the phone about 10 feet away. The trick here is that if the 'tip hit' sound isn't too loud, it won't trigger the AGC. The break will, but that's okay. Try adjusting the distance until you find the most reliable results. The danger in placing the phone too far from the table is that the echo of a break sounds remarkably like a break, and can confuse the software. These echoes are very rare, though. In all of our testing, we have only run into them once - at the US Open - because we were not able to place the phone on the table, and because the room was large and... well... echoey.

We were still able to find a good spot that gave us fantastic results from about 25 feet away.
You have another option - if it misses a break that you really want to know the speed of, you can edit the detection points yourself. On the BlackBerry, you can roll the trackball to find the markers (just below the waveform display), select them and move them left/right. There's more information on our site about exactly how to do this in our BlackBerry FAQ.
Can you figure out a way to use the sound on the iPod without using headphones?
Yep. If you look in our FAQ, there is a link to a small microphone that you can purchase for the iPod. Or just search Google for "ipod microphone". The one I use looks like a small black eraser.
This is the microphone I used during development and the same microphone we used during the US Open broadcast. (Most people didn't realize this, but that was an iPod we used, not an iPhone.)