Early this morning, we detected a peculiar situation in one of our journals, which typically shows moderate activity.
In the span of just two hours, we received 10 distinct articles with clear indications of having been generated with AI. All share very similar patterns: identical layout, repetitive structure, LLM-style or internally inconsistent writing, and content of highly questionable quality.
The submissions were made by users registered in OJS with apparently valid ORCID identifiers (though hidden in all cases). This means they passed both the ORCID and PKP verification checks.
Furthermore, all the articles are co-authored, and the first author is the same individual in every case (with a public ORCID), although this person does not match the user who performed the registration and submission on the platform.
We are particularly concerned that such submissions appear to be becoming increasingly frequent and consume a considerable amount of editorial time before they can be identified and rejected.
We would like to know how the community is tackling this issue (“spam-articles” – has this term been used before?).
Specific questions:
- What is the objective of the person or persons making these types of submissions? No such article would pass peer review, so it seems like a waste of time for both them and the journal.
- What procedures or best practices are you implementing to protect yourselves against this new phenomenon of “spam-articles”?
- What specific tools are you using to detect and block fake users or suspicious articles? Is there already a plugin or development available to detect anomalous behaviours, such as mass user registrations within short timeframes or suspicious submissions?
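In case it helps the discussion, here is a minimal sketch of the burst-detection idea mentioned above: a sliding-window check over registration timestamps that flags unusually dense clusters. It assumes you can export registration dates from your OJS database (the function name and thresholds below are hypothetical, not an existing OJS plugin or API):

```python
from datetime import datetime, timedelta

def find_registration_bursts(timestamps, window_minutes=60, threshold=5):
    """Flag windows containing an unusually high number of registrations.

    timestamps: iterable of datetime objects (e.g. registration dates
    exported from the OJS database; the exact column name varies by version).
    Returns a list of (window_start, count) tuples, one per flagged window.
    """
    ts = sorted(timestamps)
    window = timedelta(minutes=window_minutes)
    bursts = []
    start = 0
    for end in range(len(ts)):
        # Advance the left edge until all timestamps fit inside the window.
        while ts[end] - ts[start] > window:
            start += 1
        count = end - start + 1
        if count >= threshold:
            bursts.append((ts[start], count))
    return bursts
```

A real implementation would deduplicate overlapping windows and probably combine this signal with others (shared IP ranges, identical submission metadata), but even a crude report like this could surface a two-hour spike of 10 registrations for manual review.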
Any experience or recommendations will be warmly welcomed.