In response to the increasing use of AI in academic writing, many journals have begun adopting AI statements, which outline their policies on the use of AI tools and the disclosure of AI use in manuscripts. These statements aim to ensure transparency and accountability in the use of AI, while also encouraging responsible and ethical practices in academic writing.
Journals vary greatly in which uses of AI they consider acceptable in submitted manuscripts and in how they want that use documented. Consider where you plan to publish early in the research process to ensure compliance.
The Committee on Publication Ethics (COPE) has been actively discussing the ethical implications of using artificial intelligence (AI) in academic writing. COPE has recognized that AI tools can be valuable assets for researchers and writers, but has also raised concerns about potential issues such as authorship, transparency, and originality.
Examples of journals' AI statements:
Springer Nature statement on AI authorship:
Large Language Models (LLMs), such as ChatGPT, do not currently satisfy our authorship criteria. Notably an attribution of authorship carries with it accountability for the work, which cannot be effectively applied to LLMs. Use of an LLM should be properly documented in the Methods section (and if a Methods section is not available, in a suitable alternative part) of the manuscript.
Wiley Artificial Intelligence Generated Content statement:
Artificial Intelligence Generated Content (AIGC) tools—such as ChatGPT and others based on large language models (LLMs)—cannot be considered capable of initiating an original piece of research without direction by human authors. They also cannot be accountable for a published work or for research design, which is a generally held requirement of authorship (as discussed in the previous section), nor do they have legal standing or the ability to hold or assign copyright. Therefore—in accordance with COPE’s position statement on AI tools—these tools cannot fulfill the role of, nor be listed as, an author of an article. If an author has used this kind of tool to develop any portion of a manuscript, its use must be described, transparently and in detail, in the Methods or Acknowledgements section. The author is fully responsible for the accuracy of any information provided by the tool and for correctly referencing any supporting work on which that information depends. Tools that are used to improve spelling, grammar, and general editing are not included in the scope of these guidelines. The final decision about whether use of an AIGC tool is appropriate or permissible in the circumstances of a submitted manuscript or a published article lies with the journal’s editor or other party responsible for the publication’s editorial policy.
JMLA on Generative AI Submissions:
The submission of content created by generative AI is discouraged, unless it is part of formal research design or methods. Examples of content creation include writing the manuscript text, generating other content in the manuscript, as well as using the AI to generate ideas that are presented in the submitted manuscript. Software that checks for spelling, offers synonyms, makes grammar suggestions or is used to translate your own words into English does not generate new content, and we do not consider it generative AI.
If you choose to submit a manuscript with content created by generative AI systems, you must disclose and describe any use of these systems to do the following:
Write the manuscript text
Generate data, images, figures, citations
Generate ideas used in the text
Translate text other than your own words.
In doing so, you will be accepting full responsibility for the text’s factual and citation accuracy; mathematical, logical, and common-sense reasoning; and originality.