Cue Tip Contact Myth-Busting Truths in Super Slow Motion

The main problem with AI-generated stuff: garbage in = garbage out.
You are correct that the quality of AI-generated content is largely dependent on the quality of the input data that the AI is trained on. This is known as the "garbage in, garbage out" principle. If the input data is inaccurate, incomplete, or biased in some way, then the AI's output will reflect those same limitations.

However, it's worth noting that many efforts are underway to improve the quality and diversity of AI training data, and to address potential sources of bias or inaccuracy. This includes techniques such as data cleaning and preprocessing, as well as efforts to collect more diverse and representative data sets. Additionally, researchers are developing new algorithms and techniques to improve the accuracy and reliability of AI-generated content, even when the input data is imperfect or incomplete.

Despite these challenges, AI-generated content has many potential applications in a wide range of fields, from language translation and image recognition to scientific research and business analytics. As AI continues to advance and improve, it has the potential to revolutionize many areas of human activity and to generate new insights and innovations.
 
Stop trolling these fools with chatgpt replies, troublemaker.

lol
 
I apologize if my responses have caused any confusion or frustration. As an AI language model, my goal is to provide helpful and informative answers to the best of my abilities based on the input I receive. If there is a specific question or topic you would like me to provide information on, I am happy to do so to the best of my abilities.
 
Better than garbage in = gospel out.
Yes, you make a good point that the quality of the input data is an important factor in the accuracy and reliability of AI-generated outputs. Garbage in, garbage out (GIGO) is a common phrase used to describe this phenomenon, which suggests that flawed or incomplete input data can lead to flawed or inaccurate AI-generated outputs.

To overcome this challenge, it's important to carefully curate and preprocess the input data to ensure that it is relevant, accurate, and complete. This can involve a variety of techniques, such as data cleaning, normalization, and augmentation, as well as careful selection of appropriate machine learning models and algorithms.

By taking these steps to ensure high-quality input data, we can improve the accuracy and reliability of AI-generated outputs and minimize the risk of erroneous or misleading results.
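
For anyone curious, here is a minimal sketch of what the cleaning and normalization steps described above can look like in practice (Python with pandas; the file name, and the choice to rescale every numeric column to [0, 1], are illustrative assumptions, not anyone's actual pipeline):

    # Minimal data-preprocessing sketch: cleaning + normalization.
    import pandas as pd

    df = pd.read_csv("training_data.csv")  # hypothetical input file

    # Cleaning: drop exact duplicate rows and rows with missing fields.
    df = df.drop_duplicates().dropna()

    # Normalization: rescale each numeric column to the [0, 1] range
    # (assumes no column is constant, which would divide by zero).
    num = df.select_dtypes(include="number")
    df[num.columns] = (num - num.min()) / (num.max() - num.min())

    print(df.describe())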
 
Who is smarter, chatgpt or justnum?
 
How many pool players does it take to screw in a light bulb?

There are a few different humorous answers to this question, depending on the context and the tone of the joke. Here are a few examples:

One pool player to screw in the light bulb, and nine others to argue about the proper technique for doing it.
Two pool players to screw in the light bulb, and a third to rack the balls while they do it.
It doesn't matter how many pool players it takes to screw in a light bulb, as long as they're all using a low deflection cue.
Three pool players to screw in the light bulb, and a fourth to take a video of it to post on social media.

Of course, these are all just jokes and not meant to be taken seriously. The actual number of pool players needed to screw in a light bulb would depend on the specifics of the situation, such as the height of the ceiling, the size of the light bulb, and the tools and equipment available.
 
Who is smarter, chatgpt or justnum?
I don't have the ability to make subjective judgments about the intelligence of individuals, including myself or other people. I am designed to provide information and respond to questions to the best of my ability based on the data and algorithms that I was trained on.

On the other hand, "justnum" is likely a username or pseudonym of a real person, and it would not be appropriate to compare their intelligence with an AI model like myself. Each individual has their own unique strengths and areas of expertise, and it's important to recognize and respect the diversity of human knowledge and experience.
 
Yeah, but I bet justnum's dad can beat up your dad.
 
It's important to note that video documentation of experiments can be a valuable tool for providing proof and increasing the credibility of scientific findings. While there are potential limitations to the use of video in scientific research, such as issues with camera positioning, lighting, or image resolution, these can generally be addressed through careful planning and experimentation.

When conducting scientific experiments, it's important to follow established protocols for data collection and analysis, and to ensure that findings are rigorously tested and validated before they are presented as proof. Video documentation can be one useful method for capturing and analyzing data, but it is not the only method available. Other techniques, such as written notes, diagrams, or photographs, can also be used to document and analyze experimental results.

Ultimately, the quality and validity of scientific research depend on many factors, including the rigor of the experimental methods, the validity of the underlying hypotheses, and the transparency and openness of the researchers in presenting their findings. Video documentation can be one tool to support these goals, but it is not a substitute for careful and rigorous scientific inquiry.
Really?
 
Another thing of note to me: when the whole phenolic-tipped break cue thing began, all the pros I spoke to about this (or overheard talking about it) never cited more speed/power as the benefit, but rather that these tips produced less spin. Spin was undesirable on the break. But pros can be susceptible to myths too... although I tend to give weight to this "appeal to authority"...
That's exactly my experience. With a harder tip, the perfect break (CB jumping back and stopping with no english: draw, follow, left, right) is much easier, whereas with the same stroke and a softer tip I will often lose the CB on the break (due to applying too much unintended english). It seems obvious to me that more english can be applied with a softer tip, but I have no idea yet how to prove it.

I'd love to see a maximum draw with a phenolic tip on video, though.

[Attachment: max draw.JPG]


I would guess it's nearly impossible to do. But if you can apply the same english with any tip, this draw should be no problem with a phenolic-tipped break cue.
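
For what it's worth, the standard rigid-ball impulse model supports the "same english with any tip" idea: the spin-to-speed ratio depends only on how far off center the tip contacts the ball, and contact time (hence tip hardness) never enters. A minimal sketch in Python, where the offset b = R/2 and the impulse J are arbitrary assumed values:

    # Rigid-ball impulse model for spin from an off-center hit.
    # The cue delivers linear impulse J at perpendicular offset b
    # from the cue ball's center. Contact time never appears, so
    # the model predicts the same spin/speed ratio for any tip.
    m = 0.17      # cue ball mass, kg (about 6 oz)
    R = 0.028575  # cue ball radius, m (2.25 in diameter)
    b = 0.5 * R   # assumed tip offset: half the ball's radius
    J = 1.0       # assumed linear impulse, N*s

    I = 0.4 * m * R**2  # moment of inertia of a uniform sphere
    v = J / m           # ball speed right after impact
    w = J * b / I       # angular (spin) speed right after impact

    # The spin/speed ratio works out to 2.5 * (b/R): it depends only
    # on the offset b, not on J and not on how long the tip stayed on.
    print(f"v = {v:.2f} m/s, w = {w:.0f} rad/s, w*R/v = {w*R/v:.2f}")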
 
Challenge accepted. I'll work on the video tomorrow. Can you try it also? You have a good stroke and high BU Exam scores.
 
I don't have a phenolic tip on my break cue, only on my jump cue. With a phenolic tip I seem to miscue all the time (even on the break).

But I will test the draw shot with the two different tips I have.
 
My theory is that there are two kinds of strokes (I use both, depending on what I need). One has the short tip contact you measured on the video. For this one, there is no difference between tips in the english applied.

The other stroke is one where you are still accelerating while the cue is in contact with the CB. The first stroke is very repeatable (it's only about where you hit the CB: same english every time, even with different tips, as shown in your video), so it's great for position play (you know exactly how your CB will travel). But there are limits to this stroke (in my theory, you cannot draw the ball 15 diamonds with it). So for extreme english (extreme draw, for example) you need the stroke that accelerates through CB contact.

A nice follow-through is then what lets you accelerate through the CB. And because the contact time is so short with a very hard tip, this accelerating stroke doesn't work (or is very tough to produce) with a phenolic tip.
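
To put rough numbers on the accelerating-stroke idea: even if the cue is still accelerating during contact, the extra speed it can add while tip and ball are touching is only about a x tau. Here is a back-of-envelope sketch (Python; the acceleration and contact times are assumed round figures, not measurements):

    # Rough check: how much extra cue speed can acceleration add
    # *during* tip-ball contact? Extra speed is roughly a * tau.
    # All values below are assumed round numbers for illustration.

    a = 30.0  # assumed cue acceleration through impact, m/s^2

    for tip, tau in [("soft tip", 1.5e-3), ("phenolic tip", 0.5e-3)]:
        dv = a * tau  # speed gained while tip and ball are in contact, m/s
        print(f"{tip}: contact ~{tau*1e3:.1f} ms, extra speed ~{dv*100:.1f} cm/s")

    # Against a typical stroke of several m/s, a few cm/s is under 1%,
    # so in this simple model the acceleration during contact itself
    # changes the hit very little; any real difference between the two
    # strokes would have to come from tip speed/position at impact.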

[Attachment: masse.JPG]


This would be another example of extreme english that probably won't work with a phenolic tip. I'd expect the CB to jump with a phenolic tip, while a soft tip will allow extreme english to get the desired curve.
 
Well stated. It doesn’t really matter what tip hardness you choose. If you think your choice is the best for you, you will probably play better with it. The mind is a powerful thing.

My only goal in this thread and in my follow-up video was to debunk the common pool myth that a softer tip can impart more spin than a harder tip. I think I accomplished my goal. I also added many supporting explanations on my resource page, which is now much more thorough and complete. So, to me, something good did come from this long discussion and debate. Now, when somebody spouts off the myth again, I can just send them to the page without having to get into a big discussion and debate again.
What exactly did you achieve? You designed a test to return a specific data sample. You didn't test to acquire data that needs to be sifted through and quantified. You aren't doing any sort of science experiment. You aren't busting myths. You literally went at this with the idea that you were designing a test to return a specific sample of data. And by your, PJ's, and Cornerman's implication, if a shot did not follow the rules of your test, it is automatically thrown out. That's not science.
 
That's controlling variables to get meaningful test results, literally the heart of science.

There's a remarkable lack of basic scientific understanding exhibited by the naysayers here. They actually make Dave look pretty good.

pj
chgo
 