Apple's Vision framework is much more advanced than Google's Firebase ML Kit. Not only is Apple Vision easier to implement, it can track 16 simultaneous objects, whereas Firebase ML Kit can only handle 5. In other words, a maximum of a 15-ball drill versus a maximum of a 4-ball drill (with the cue ball taking the remaining tracking slot in each case). Or, looking further ahead, tracking an entire game of 8-ball on iOS.
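For context, the 16-object figure refers to Vision's object tracker: up to 16 `VNTrackObjectRequest`s running on a single `VNSequenceRequestHandler`. Here's a minimal sketch of what tracking a couple of balls looks like; the initial bounding boxes and the frame source are placeholder assumptions, not anything from a real drill setup.

```swift
import Vision
import CoreGraphics
import CoreVideo

// Placeholder boxes in Vision's normalized coordinates; in a real app they
// would come from an earlier detection pass, not hard-coded values.
let initialBallBoxes: [CGRect] = [
    CGRect(x: 0.42, y: 0.55, width: 0.05, height: 0.05), // e.g. cue ball
    CGRect(x: 0.60, y: 0.30, width: 0.05, height: 0.05), // e.g. 8 ball
]

// One tracking request per ball; Vision supports up to 16 of these at once
// on the same sequence handler.
let trackingRequests: [VNTrackObjectRequest] = initialBallBoxes.map { box in
    let seed = VNDetectedObjectObservation(boundingBox: box)
    let request = VNTrackObjectRequest(detectedObjectObservation: seed)
    request.trackingLevel = .accurate
    return request
}

let sequenceHandler = VNSequenceRequestHandler()

// Call this for every new video frame (e.g. from an AVCaptureSession output).
func track(frame pixelBuffer: CVPixelBuffer) {
    do {
        try sequenceHandler.perform(trackingRequests, on: pixelBuffer)
    } catch {
        print("Tracking failed: \(error)")
        return
    }

    for request in trackingRequests {
        // Read the updated box and feed it back in so tracking continues
        // on the next frame.
        guard let observation = request.results?.first as? VNDetectedObjectObservation else { continue }
        request.inputObservation = observation
        print("Ball now at \(observation.boundingBox), confidence \(observation.confidence)")
    }
}
```

The Firebase side of the comparison is ML Kit's on-device object detection and tracking, which caps out at five tracked objects per frame, hence the 4-ball-plus-cue ceiling above.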
I'm unsure how many simultaneous objects can be tracked with the TensorFlow Lite SDK, but I'm almost positive it isn't much more than Firebase ML Kit. One could use the Android NDK and build the full TensorFlow suite into the app, which would handle more objects and classify better than Apple Vision; however, Android AI chipsets aren't as advanced as the iPhone's, so it would be a bigger drain on the battery and perform worse by comparison.
TL;DR Android AI sucks in comparison to Apple Vision.