How do the big budget trackers work?
Probably with an expensive camera, a high frame rate, and a dedicated installation and software.
I agree this is probably the correct approach. I think for slower speeds you can get a close-to-exact speed.
For faster speeds you use AI. Basically you want to look at cue-ball distortion (the cue ball should look more oval the faster it is going).
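As a rough sketch of that distortion idea, assuming the exposure time is known and the blur simply smears the ball along its direction of travel (both assumptions of mine, not from the thread):

```python
def speed_from_blur(major_axis_mm, ball_diameter_mm, exposure_s):
    """Estimate ball speed (mm/s) from motion-blur elongation.

    Assumes the blurred ball shows up as an oval whose major axis is
    roughly the ball diameter plus the distance travelled during the
    exposure: major_axis = diameter + speed * exposure.
    """
    if exposure_s <= 0:
        raise ValueError("exposure must be positive")
    travel_mm = major_axis_mm - ball_diameter_mm
    return max(travel_mm, 0.0) / exposure_s

# A 57 mm cue ball smeared into a 77 mm oval during a 10 ms exposure
# travelled ~20 mm, i.e. about 2000 mm/s (2 m/s).
print(speed_from_blur(77.0, 57.0, 0.01))
```

In practice the smear direction, lighting, and rolling-shutter effects complicate this, which is why the approach below leans on training data rather than a closed-form model.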
To do this you want to set up a high-speed camera and a low-speed camera. Shoot a bunch of shots at different speeds and record them all.
Now, with the high-speed camera, calculate their actual speed. Take the frames from the low-speed camera and feed them into a CNN (convolutional neural network), after filtering out the non-cue-balls, as they are noise. Have the outputs of the CNN correspond to discrete speeds.
Now train the CNN using the actual values from the high-speed camera paired with their related distorted pictures. You could even have the speed you calculate via distance over time from the slow camera be an input to the CNN (discretized, of course; this might not be the best way, however), or alternatively have multiple frames be inputs to the CNN.
To make this work properly you will probably have to create training data with a variety of low-speed cameras and frame rates/resolutions.
To me this seems like the best way to do it.
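The discretization step in that pipeline (ground truth from the fast camera, discrete speed classes as CNN outputs) can be sketched in plain Python; the bin edges and data layout here are my own illustrative choices, not anything prescribed above:

```python
def make_training_pairs(low_fps_frames, true_speeds_mps, bin_edges):
    """Pair each low-speed-camera frame with a discrete speed class.

    true_speeds_mps[i] is the ground-truth speed (from the high-speed
    camera) at the moment frame i was captured. bin_edges defines the
    discrete classes the CNN will output, e.g. [0.5, 1.0, 2.0, 4.0]
    gives 5 classes: <0.5, 0.5-1.0, 1.0-2.0, 2.0-4.0, >=4.0 m/s.
    """
    pairs = []
    for frame, speed in zip(low_fps_frames, true_speeds_mps):
        label = sum(1 for edge in bin_edges if speed >= edge)
        pairs.append((frame, label))
    return pairs

frames = ["f0", "f1", "f2"]  # stand-ins for image arrays
speeds = [0.3, 1.5, 6.0]
print(make_training_pairs(frames, speeds, [0.5, 1.0, 2.0, 4.0]))
# labels: 0.3 -> class 0, 1.5 -> class 2, 6.0 -> class 4
```

(frame, label) pairs like these would then feed whatever CNN framework you choose; the binning is the part the approach fixes, the network itself is a free choice.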
Alternatively, you could measure the length of one side of the distorted oval versus the other, feed the measurements into a regression model, and have it predict the actual speed. Again you will need a variety of training data. This attains a continuous, "exact" value, unlike the former method, which results in discretized, finite-state predictions.
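A minimal version of this regression alternative, assuming a roughly linear relation between elongation and speed (real data would need a richer model and per-camera calibration):

```python
def fit_elongation_slope(elongations_mm, speeds_mps):
    """Least-squares slope through the origin: speed = k * elongation."""
    sxy = sum(x * y for x, y in zip(elongations_mm, speeds_mps))
    sxx = sum(x * x for x in elongations_mm)
    return sxy / sxx

# Synthetic calibration data where speed is exactly 0.1 * elongation.
k = fit_elongation_slope([10.0, 20.0, 40.0], [1.0, 2.0, 4.0])
print(k, k * 30.0)  # slope ~0.1, so 30 mm elongation -> ~3 m/s
```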
In general, any digital camera has a weak microcontroller (MCU), an attached USB interface chip, a single A/D converter, and a decoder to address the individual photocells (pixels), i.e., to attach them serially to the A/D via a scan algorithm. How fast the decoder scans the cells depends on the particular camera and the resolution mode it is in.
There is also no guarantee that the scanning takes place in a top-to-bottom, row-by-row linear fashion; hence the need for a lot of training data to deal with these variables. I would also assume there is no guarantee that the time between frames is constant. (The fact that the MCU has to handle interrupts from the USB chip upon receiving USB frames will alone keep it from being constant.) This could add noise as well.
Maybe with a cell-phone camera the video frames are placed directly in memory via a shared bus, but even this will have potential delays; it really depends on the design of the architecture. Either way, the transfer of video frames from the camera's MCU to the main CPU's memory will cause the advertised period between frames to vary.
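One practical consequence: if the capture stack exposes per-frame timestamps, dividing displacement by the measured delta-t absorbs this jitter instead of trusting the advertised frame rate. A small sketch (positions and timestamps are made-up values):

```python
def speeds_from_timestamps(positions_mm, timestamps_s):
    """Per-interval speeds (mm/s) using actual frame timestamps.

    Dividing by the measured delta-t absorbs frame-interval jitter
    that a fixed 1/fps assumption would turn into speed error.
    """
    speeds = []
    for i in range(1, len(positions_mm)):
        dt = timestamps_s[i] - timestamps_s[i - 1]
        speeds.append((positions_mm[i] - positions_mm[i - 1]) / dt)
    return speeds

# Nominal 30 fps (~33 ms), but the second interval arrived 7 ms late.
# Using the real timestamps, both intervals give the same ~2000 mm/s.
print(speeds_from_timestamps([0.0, 66.0, 146.0], [0.000, 0.033, 0.073]))
```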
The technology bottleneck for a consumer version of OpenCV tracking is the camera.
Converting an off-the-shelf computer camera into a high-speed camera would solve the precision problem posed by less capable cameras.
An alternative to high-speed tracking is post-processing for total displacement measurement. The images could be overlaid to distinguish the still balls from the moving one. The moving ball will have a tail that can be measured from the composite of images.
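A toy version of that composite idea, assuming a bright ball on dark cloth so that a per-pixel maximum across frames preserves the tail (a real implementation would work on full images, e.g. with OpenCV or NumPy):

```python
def composite_max(frames):
    """Per-pixel max across grayscale frames (bright ball, dark cloth):
    still balls stay round, the moving ball leaves a bright tail."""
    h, w = len(frames[0]), len(frames[0][0])
    return [[max(f[r][c] for f in frames) for c in range(w)]
            for r in range(h)]

def tail_length_px(composite, threshold, row):
    """Horizontal extent of bright pixels in one row of the composite."""
    cols = [c for c, v in enumerate(composite[row]) if v > threshold]
    return (max(cols) - min(cols) + 1) if cols else 0

# Three 1x8 'frames' of a bright ball moving right two pixels per frame.
f1 = [[0, 255, 255, 0, 0, 0, 0, 0]]
f2 = [[0, 0, 0, 255, 255, 0, 0, 0]]
f3 = [[0, 0, 0, 0, 0, 255, 255, 0]]
comp = composite_max([f1, f2, f3])
print(tail_length_px(comp, 128, 0))  # tail spans columns 1..6 -> 6 px
```

The tail length in pixels, divided by the capture duration and the pixels-per-mm scale of the table, gives the average speed over the composite.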
The product I am developing is a cue ball with an IMU in it. Having solved all of the obvious problems already (weight, balance, shock, smoothness, charging, battery life, connectivity, radio range, etc) and having already developed the app, I have time for minor improvements.
The ball gives very accurate spin information. Adding in ball velocity and ball diameter, you get impressively accurate tip location. Integrating for accurate velocity with an IMU that undersamples relative to the impulse time is impossible. Instead, I have the user input the distance between the OB and CB, with a choice of a couple of different methods, cue lengths being the easiest and fastest. They use a simple slider, then take their shot and wait a few seconds. Time-series data appears and they select the impact time. This results in very accurate velocity and thus satisfies the requirement.
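As I understand the described method, the velocity calculation reduces to the user-entered distance divided by the time between two points picked off the IMU trace. A sketch with an assumed cue length (the real app's constants and time-picking UI are its own):

```python
CUE_LENGTH_M = 1.47  # assumed value; the app's own constant may differ

def cue_ball_speed(distance_cue_lengths, strike_t_s, impact_t_s):
    """Average cue-ball speed (m/s) between tip strike and OB impact.

    distance_cue_lengths: the user-entered CB-to-OB gap, in cue lengths.
    strike_t_s, impact_t_s: times picked off the IMU time series
    (e.g. the two acceleration spikes).
    """
    dt = impact_t_s - strike_t_s
    if dt <= 0:
        raise ValueError("impact must come after the strike")
    return distance_cue_lengths * CUE_LENGTH_M / dt

# Two cue lengths covered in 1.4 s -> about 2.1 m/s average speed.
print(cue_ball_speed(2.0, 0.30, 1.70))
```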
Experimenting with making things completely hands-free requires automatic measurement of cue-ball velocity. This seems to be doable only with an outside observer, i.e. a radar gun or a camera. That increases cost and complexity without much gain; in fact, performance is worse.
I am finding that the distance/time selection method is the most accurate, and worth the small user interaction required, compared to all the hands-free methods I have tried.
Here is an example of the app. The middle window is the distance slider based on diamonds and cue lengths. The bottom slider shows time series data of rotational magnitude (blue) and acceleration (green).
View attachment 698834
View attachment 698835
This is cool! I heard that golf balls have chips in them, so why not cue balls? I'm curious what the inside of the ball with the chip would look like and how it would not impede playability.
I've managed to keep track of the most commonly asked questions and biggest concerns. This is the most common one.
I'm looking forward to buying one of these balls!
The answer lies in making the sensor package's overall density equal to the density of the resin being used, which was not easy to accomplish.
The antenna characteristics, shock resistance, center of mass, weight and manufacturing process are all related variables.
... it was a long iterative process.
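For illustration only, the density-matching constraint reduces to solving one equation; the material numbers below are made up, not the product's actual figures:

```python
def ballast_volume_cm3(sensor_mass_g, sensor_volume_cm3,
                       resin_density, ballast_density):
    """Volume of ballast so the whole package matches the resin density.

    Solves (m_s + rho_b * V_b) / (V_s + V_b) = rho_resin for V_b,
    with densities in g/cm^3. A negative result means the ballast
    density is on the wrong side of the target (use a lighter filler).
    """
    v = ((resin_density * sensor_volume_cm3 - sensor_mass_g)
         / (ballast_density - resin_density))
    if v < 0:
        raise ValueError("ballast density is on the wrong side of target")
    return v

# Made-up numbers: a 10 g, 8 cm^3 electronics package (1.25 g/cm^3)
# in 1.3 g/cm^3 resin, ballasted with 8.5 g/cm^3 brass.
print(ballast_volume_cm3(10.0, 8.0, 1.3, 8.5))
```

Of course this ignores the other coupled variables mentioned above (antenna, shock, center of mass), which is presumably what made the real process so iterative.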