Policy on the Use of Artificial Intelligence and AI-Assisted Technologies

This policy applies to all participants in the Journal’s editorial process, including:

  • Authors who submit materials to the Journal.
  • Reviewers who conduct expert evaluation of submitted materials.
  • Members of the editorial board and editorial staff.

Use of AI tools by authors

Authors may use AI tools to improve the quality of their manuscripts, provided that the following conditions are met:

Responsibility for content. Authors bear full responsibility for the accuracy, originality, and reliability of the entire content of their manuscript, including parts created or edited using AI tools. The use of AI does not release the author from responsibility for plagiarism, fabrication, falsification of data, or other violations of academic integrity.

Transparency and disclosure. Authors must clearly disclose the use of AI tools during the preparation of the manuscript in the “Acknowledgements” section or in the “Methods and Data” section, specifying the name of the AI tool, its version, and the purpose of its use (for example, text editing, grammar checking, data analysis, or image generation).

Limitations regarding authorship. AI tools cannot be listed as authors of manuscripts. Authorship belongs only to individuals who have made a significant intellectual contribution to the research and preparation of the publication.

Verification of generated content. Authors must carefully verify any text, data, or images generated by AI tools for accuracy, contextual relevance, and the absence of bias or errors.

Use for data analysis. When AI tools are used for data analysis, authors must clearly describe the analysis methodology, including the AI tools used, in the “Methods and Data” section.

Use for image generation. If AI tools were used to generate images or other visual materials, authors must indicate this in the captions to the relevant elements and in the “Methods and Data” section.

Use of AI tools by reviewers

Reviewers may use AI tools to improve the efficiency of the review process, provided that the following conditions are met:

Confidentiality. Reviewers are not allowed to upload confidential manuscript materials (including full text, data, figures, etc.) to AI tools that do not guarantee confidentiality and data security.

Objectivity. The use of AI tools must not affect the objectivity and impartiality of the review process. The reviewer bears full responsibility for the content of their review.

Disclosure. Reviewers are encouraged to inform the editorial board about the use of AI tools during the preparation of the review, if applicable.

Use of AI tools by the editorial board and staff

The editorial board and Journal staff may use AI tools to optimize editorial processes, such as:

  • Plagiarism checking (using specialized tools).
  • Checking formatting and compliance with the Journal’s requirements.
  • Assistance in selecting potential reviewers.
  • Analysis of trends in scientific publications.

At the same time, the editorial board and staff must strictly adhere to the principles of confidentiality and data security and remain alert to the bias that may be inherent in some AI tools. Decisions to accept or reject a manuscript are always made by the editorial board on the basis of scientific value and compliance with the Journal’s standards, and never solely on the basis of results obtained using AI tools.

Ethical aspects and responsibility

Plagiarism. The use of text generated by AI tools without proper citation of the source and/or without substantial revision by the author may be considered plagiarism.

Bias. AI tools may generate content containing biases that reflect the data on which they were trained. Authors, reviewers, and editors must critically evaluate AI-generated results and avoid the dissemination of biased content.

Transparency. Maximum transparency regarding the use of AI tools is a key element of responsible scientific publishing.

Responsibility. The ultimate responsibility for compliance with the ethical norms and standards of the Journal lies with authors, reviewers, and members of the editorial board.

Handling violations

Any detected cases of improper or unethical use of AI tools will be reviewed by the editorial board in accordance with the Journal’s procedures for handling violations of publication ethics. This may include requesting clarification from the author/reviewer, rejection of the manuscript, retraction of a published article, or other measures.

Recommended wording for authors

In the article (in the “Acknowledgements” section or in the “Methods and Data” section):

“During the preparation of this manuscript, the AI tool [name, version] was used for [task]. The authors confirm that all data and conclusions are reliable and have been verified.”

Recommendations for authors

  • Use AI as an auxiliary tool: Use AI for editing, translation, or generating ideas, but do not rely on it to create the main content of the manuscript.
  • Verify facts and sources: AI may generate false or fabricated information. Always verify facts and references, especially if they were generated using AI.
  • Avoid plagiarism: The use of AI does not exempt authors from responsibility for plagiarism. Ensure that all sources are properly cited and that generated text is original or appropriately attributed.

We follow the Elsevier Generative Artificial Intelligence Policy for Scientific Journals and the COPE Policy on Artificial Intelligence (AI) in Decision-Making.