Optical Flow Density with OpenCV

nataddrho

www.digicue.net
Silver Member
Pubo got me interested in exploring robust ways to track cue ball speed with a commercial webcam. In his demonstration he used a high-speed camera and bright lighting to track ball paths based on the light reflection points of the balls.

With lesser equipment in a more common setup, how can we get ball speed based on frame rate and motion tracking?

Here I used a white HSV filter to identify cue ball color, and then applied Gunnar Farneback's algorithm to track the motion between two consecutive frames. Intensity (value) is proportional to ball speed, and color (hue) represents the direction of motion.

The results are noisy but interesting, as they can be mapped to speed after a homographic projection. Needs a lot of refining.

Optical Flow Density
 
Play slower shots and use more efficient mathematics to approximate distances.
 
Can someone provide code for something better than this? I’d be grateful.

Check out the link to the snooker GitHub I posted in the last thread.

I'm curious why you aren't just measuring how many pixels the center of your blob/circle moves each frame? That's what the optical golf ball monitors do.
 
what in the blue monkey hell is this sorcery? what is the purpose behind this? how does this in practical terms make someone a better pool player? or is it just for analysis or wasting time? My God, am I dead yet?
 
what in the blue monkey hell is this sorcery? what is the purpose behind this? how does this in practical terms make someone a better pool player? or is it just for analysis or wasting time? My God, am I dead yet?
Given that I spent nearly as much time on my arcos golf app as I did on the golf course last year.... ppl love to waste their time analyzing their game. Sometimes they even fluke into learning something from it.
 
chatGPT may help
This isn't CGPT?

what in the blue monkey hell is this sorcery? what is the purpose behind this? how does this in practical terms make someone a better pool player? or is it just for analysis or wasting time? My God, am I dead yet?
I for one would like to see shot geometry depicted for all to see. It is, after all, how the balls connect. If people got the concept burned into their mind's eye, they'd miss less.
 
I for one would like to see shot geometry depicted for all to see. It is, after all, how the balls connect. If people got the concept burned into their mind's eye, they'd miss less.
I do that by watching my shots on the table in real time and setting them back up when I make a mistake.
 
what in the blue monkey hell is this sorcery? what is the purpose behind this? how does this in practical terms make someone a better pool player? or is it just for analysis or wasting time? My God, am I dead yet?
In this case it was for wasting time. There is no direct application for this, just a silly experiment.
 
I do that by watching my shots on the table in real time and setting them back up when I make a mistake.
Yes but with actual depictions of shot alignments burned into your memory, seeing "da angles" will be as familiar as "the balls are the round things".
Seriously, contact point alignment is so fundamental and easy to learn, all it needs is proper depiction. It's so concise, they pile it with excuses of every kind to do another method instead. But I digress. Back to your normally scheduled programming.
 
On a side note that table looks incredibly fast. I was expecting a ball to stop near the rail, instead it hits the rail and rebounds a fair distance off the rail.
 
what in the blue monkey hell is this sorcery? what is the purpose behind this? how does this in practical terms make someone a better pool player? or is it just for analysis or wasting time? My God, am I dead yet?

I see a simple way to use this for training. Set up a pattern, have a pro/good player shoot it, and track the locations and speeds of the balls. Set it up again for someone learning, track their ball locations and speeds, and compare to see what could have been done better, in actual numbers instead of feel.
 
Play slower shots and use more efficient mathematics to approximate distances.

I think for slower speeds you can get close to an exact speed.

For faster speeds you use AI. Basically you want to look at cue ball distortion (from motion blur, the cue ball should look more oval the faster it is going).

To do this you want to set up a high speed camera and a low speed camera. Shoot a bunch of shots at different speeds and record them all.

Now with the high speed camera calculate their actual speed. Take the frames from the low speed camera and put them into a CNN (convolutional neural network) after filtering out the non-cue-balls, as they are noise. Have the outputs of the CNN correspond to discrete speeds.

Now train the CNN using the actual values from the high speed camera with their related distorted pictures. You could even have the speed you calculate with distance over time from the slow camera be an input to the CNN (discretized, of course; though this might not be the best way), or alternatively have multiple frames be inputs to the CNN.

To make this work properly you probably will have to create training data with a variety of low speed cameras and framerates/resolutions.
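The proposed network could be sketched roughly like this, assuming PyTorch: a few convolutional layers over a cropped grayscale cue-ball patch, with one output logit per discrete speed bin. The patch size (64×64) and the number of bins (10) are arbitrary placeholders; the training labels would come from the high-speed-camera ground truth described above.

```python
import torch
import torch.nn as nn

N_SPEED_BINS = 10   # placeholder: number of discrete speed classes

class SpeedCNN(nn.Module):
    def __init__(self, n_bins=N_SPEED_BINS):
        super().__init__()
        # Three conv/pool stages over a 64x64 grayscale patch -> 64 x 8 x 8.
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.head = nn.Linear(64 * 8 * 8, n_bins)  # one logit per speed bin

    def forward(self, x):               # x: (batch, 1, 64, 64) patches
        h = self.features(x)
        return self.head(h.flatten(1))
```

Training would pair each blurred patch with the speed bin measured by the high-speed camera and minimize `nn.CrossEntropyLoss` on the logits.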

To me this seems like the best way to do it.

Alternatively, you could measure the length of one side of the distorted oval vs. the other side, place those measurements into a regression model, and have it predict the actual speed. Again you will need a variety of training data. This will give a continuous "exact" value, unlike the former method, which results in discrete, finite-state predictions.
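A minimal sketch of that regression alternative: use the ratio of the blur ellipse's major to minor axis (elongation) as the feature and fit a simple least-squares line from elongation to speed. The training pairs below are synthetic placeholders; real ones would come from the high-speed-camera ground truth.

```python
import numpy as np

# Placeholder (elongation, speed m/s) training pairs; real data would be
# measured with the high speed camera as described above.
elongation = np.array([1.0, 1.2, 1.5, 1.9, 2.4])
speed      = np.array([0.0, 0.5, 1.2, 2.1, 3.3])

# Least-squares line: speed ~= a * elongation + b
a, b = np.polyfit(elongation, speed, deg=1)

def predict_speed(major_axis, minor_axis):
    """Continuous speed estimate from the blur ellipse axes
    (e.g. measured with cv2.fitEllipse on the thresholded blob)."""
    return a * (major_axis / minor_axis) + b
```

Unlike the binned CNN output, this gives a continuous estimate; a non-linear regressor could replace `np.polyfit` if the blur-vs-speed relationship turns out not to be linear.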

In general, any digital camera has a weak microcontroller (MCU), an attached USB interface chip, a single A/D converter, and a decoder to address the individual photo cells (pixels) (i.e., attach them serially to the A/D via a scan algorithm). How fast the scanning of cells via the decoder takes place depends on the particular camera and the resolution mode it is in.

There is also no guarantee that the scanning takes place in a top-to-bottom, row-by-row linear fashion. Hence the need for a lot of training data to deal with these variables. I would assume as well that there is no guarantee the time between frames is constant. (The fact that the MCU has to handle interrupts from the USB chip upon receiving USB frames alone causes this to vary.) This could add noise as well.

Maybe with a cell phone camera the video frames are placed directly in memory via a shared bus, but even this will have potential delays. It really depends on the design of the architecture. Either way, the transfer of the video frames from the camera's MCU to the memory of the main CPU will cause the advertised period between frames to vary.
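A quick NumPy simulation shows why that timing jitter matters for speed estimation: if the true interval between frames wanders around the advertised period but the estimator divides by a constant 1/FPS, the per-frame speed estimate picks up proportional error. The ±10% jitter figure is purely illustrative, not from any specific camera.

```python
import numpy as np

rng = np.random.default_rng(0)
fps = 30.0
true_speed = 2.0                        # m/s, simulated ground truth
n_frames = 200

# True frame intervals: nominal period plus up to +/-10% jitter.
nominal = 1.0 / fps
intervals = nominal * (1 + rng.uniform(-0.1, 0.1, n_frames))

# Distance the ball actually travels each frame, then the naive estimate
# that assumes a constant nominal period instead of the true interval.
distances = true_speed * intervals
naive_estimates = distances / nominal

err = np.abs(naive_estimates - true_speed) / true_speed
print(f"max per-frame speed error: {err.max() * 100:.1f}%")
```

The error is bounded by the jitter itself, so averaging over many frames (or reading hardware timestamps where the camera exposes them) would reduce it considerably.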
 
Keep me updated! That would be an awesome project :)
 