I would say no it's not fraud and not even dishonest -- it's actually kind of honest, open and direct in that they put the text right there.
The fraudster is whoever is submitting any paper they were asked to review to a LLM instead of properly reviewing it.
A LLM is not intelligent and not capable of reviewing a research paper accurately.
The AI can look like they are doing what you ask them for, but that is not exactly the case.
As the whole matter of prompt injection shows.. they are actually looking for signals which can be very different than the signals you think the LLM is looking at.
And there can and are very unexpected (As far as humans can tell) interactions between training data and prompts, and what actually happens.
These are the kind of agents that suggested putting glue on a pizza, LOL. There's a good chance the LLM trained on an Opinion piece, and that will affect the outcome of a so-called "review" by a LLM. Like someone writes on Reddit this random crap that the text of the paper pings on, and you'll have a negative or positive line in a review that is definitely not objective or reasonable.