This is an open letter to all my fellow journal editors out there. You can, and should, use deterministic artificial intelligence (improved versions of the kind we’ve used for decades in spell-checking applications) to help you find reviewers when you are overwhelmed by the possible selection. These tools will tell you what various potential reviewers have published and give you reasonably sound recommendations that you can easily accept or reject based on sound and accurate editorial guidelines.
This is NOT true for Large Language Model (LLM) based artificial intelligence. These systems will recommend reviewers who are not competent to review the articles in question. In a particularly egregious case, I was asked to review an article on the effectiveness of cancer treatments. I am completely incompetent to review such an article, as I have not, at any time in my life, studied how to assess the efficacy of cancer treatments. But the LLM saw my clinical and COVID-related work and guessed that I was competent to do so!
The editor should have caught that, but may not even have been given enough information to catch the error, and may not have had time to do their own vetting after the recommendation. This kind of error tells me that no one should be using LLMs for this purpose. The risk to science itself is too high. Don’t do it!
Donald Derrick