That does make sense. Until a more reliable detection tool exists, what else can we do? It's still far better than removing genuine writers or punishing them.
If needed, when an article gets too many reports of being AI-written, the platform could reduce its distribution or issue the author a warning. What else could be done?