Artificial Intelligence produces *self-teaching* soccer-playing robots. Pool-playing ones not far off IMO

arnaldo

AzB Silver Member
Silver Member
If you didn't see it, this was last Sunday's exciting, even scary "60 Minutes" report on present and future capabilities of Artificial Intelligence.

The single most fascinating episode of the CBS Sunday-night program "60 Minutes" that I've ever seen in my 55 years of watching it (the show began in 1968).

Excellent research and reporting. What's currently being done with AI, and what the (arguably controversial) future holds, will blow your mind:


There will unquestionably be impossible-to-shark, self-teaching, self-improving (and self-aware!!) pool-playing androids in the not-too-distant future.

Arnaldo
 

justnum

Billiards Improvement Research Projects Associate
Silver Member
ChatGPT or any AI is good at solving known problem patterns, including popular academic assessments from pre-K up to introductory graduate-level study. Playing high-level pool is not yet ready to be solved as a programming problem, because no one has developed a system for playing run-out sets on the table. You can all imagine completing a runout, but not all runouts are the same.

Neither video in this thread solves existing problems in the billiard community, such as wheelchair-based players or players with Parkinson's or other diseases that affect hand or vision performance.

Both videos are great at selling products to a general audience. However, the existing pool fan likely wants tools that make them shoot better, pick out better shots, or pocket shots more easily.

If there were a magic tool that could create a wave of new players, it would be a cue stick that shows a player where to hit the cue ball to pocket shots or play safe. New players always want an edge. Sell them the dream and let them find out what it takes to make the dream a reality.
 

CuesDirectly

AzB Silver Member
Silver Member
In reality, AI is Artificial Ignorance; after all, it's only as smart as those who programmed it.
 

fjk

AzB Silver Member
Silver Member
I just watched a video a few days ago of a pool-playing robot. It was making simple to medium-difficulty shots pretty consistently. It will improve at an exponential pace. Freaking technology.
 

Banger

AzB Silver Member
Silver Member
I'm always amazed at how quick people are to proclaim their incompetence: I'm too stupid/lazy to learn how to play pool, so I want a robot to do it for me.

No missed shots, no winners, no losers. No feeling of accomplishment.

If people think pool is boring now, wait until AI takes over.
 

JessEm

AzB Goldmember
Silver Member
Houston, we have a problem. Prompted by this thread, I decided to poke around on GPT-4 a little.

My questions were about absentee ballots in the 2020 election. Specifically, "How many absentee ballots were cast for Joe Biden in 2020?"

Within a matter of 5 minutes, after questions regarding Biden's absentee votes exceeding Obama's entire vote total, ChatGPT's answer changed.

Make of this what you will.

But simple, empirical numbers don't change. Especially not within 5 minutes for the same question verbatim (literally copied and pasted the second time).

These were simple questions with absolute answers. Did my line of questioning trigger interference of some kind within its programming? Or, and this is mind-boggling even to someone with an understanding of AI, did this effin thing LEARN an "acceptable" answer via the social and political nuance and bias connected to my questions, and answer accordingly?? WTF, ppl!

Again, they are empirically different answers.

Like a midget at a urinal, let's stay on our toes.


[Attached screenshots of the exchange: gt1.png, gt2.png, g3.png, gt4.png, gt5.png]
 

hang-the-9

AzB Silver Member
Silver Member
The good thing about computers doing recreational stuff like chess and sports is that no one will really worry about them taking over our lives. I doubt the APA or Matchroom is going to be putting robot players in their midst to compete with the humans LOL

I don't care one bit if a robot can run 99 racks of 10-ball or 1,600 balls in 14.1, or can strike out every MLB player. Of course they will be able to, but so what? It's like seeing that a 6' 4" weightlifter can lift heavier things than a 14-year-old gymnast; sure he can, but there is nothing that really matters about it.

However, when you can type a request into some AI bot to write a story, do some math, pick a stock to buy, or write a program, that is where humans have issues, because that is the stuff of actual meaning, like work and society. Anything of difficulty or value can become trivial. Who would care to spend 20 years learning something when a 16-year-old with a keyboard and screen can do the same thing with some free program?
 

kling&allen

AzB Gold Member
Gold Member
JessEm said:
[post and screenshots quoted above]
You can get similar (wrong) answers by asking it basic math problems. GPT is a language model without any concept of facts. It's just combining words and numbers based on how frequently they relate to the input you give it.
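
To make that last point concrete, here is a minimal toy sketch of what "combining words based on how frequently they relate to the input" can look like. This is not ChatGPT's actual internals; the question is reused from the screenshots, but the probability table and the X/Y/Z placeholder answers are entirely invented for illustration. The point is only that an answer is sampled from a distribution over continuations, with no fact lookup anywhere:

```python
import random

# Toy next-token "model": just a probability table over possible
# continuations of a prompt. There is no database of facts to consult;
# the figures below are placeholders (X/Y/Z), invented for illustration.
NEXT_TOKEN_PROBS = {
    "How many absentee ballots were cast for Joe Biden in 2020?": {
        "Roughly X million.": 0.40,
        "About Y million.": 0.35,
        "Somewhere over Z million.": 0.25,
    },
}

def sample_answer(prompt, temperature=1.0, rng=random):
    """Pick one continuation by weighted random choice.

    With temperature > 0 the pick is stochastic, so the exact same
    question can come back with different, inconsistent answers.
    """
    dist = NEXT_TOKEN_PROBS[prompt]
    answers = list(dist)
    # Temperature reshapes the weights; it never checks whether an answer is true.
    weights = [p ** (1.0 / temperature) for p in dist.values()]
    return rng.choices(answers, weights=weights, k=1)[0]

if __name__ == "__main__":
    question = "How many absentee ballots were cast for Joe Biden in 2020?"
    print(sample_answer(question))  # first ask
    print(sample_answer(question))  # same verbatim question, possibly a different answer
```

Run it twice and you may get two different "answers" even though nothing about the world changed. A real model does the same kind of sampling over a vastly larger vocabulary, which is one plausible reason the screenshots above disagree with each other.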

 