Generative AI Policy
1. Policy Overview
The Journal of Advanced and Sustainable Technologies (ASET) recognizes the growing use of generative artificial intelligence (GenAI) and AI-assisted technologies (e.g., ChatGPT, Copilot, DALL-E, Grammarly) in research and publishing. This policy establishes mandatory principles of transparency, accountability, and integrity for all authors, reviewers, and editors involved with the journal. These guidelines are aligned with evolving standards in academic publishing, including those set by COPE, Elsevier, and the STM Association (see Section 6).
2. Core Principles
- Human Responsibility: Humans are solely responsible for the scholarly content of any submission, including its validity, integrity, and ethical compliance. AI tools cannot be authors or assume accountability.
- Mandatory Transparency: Any use of GenAI must be explicitly disclosed.
- Confidentiality & Security: GenAI must not be used in ways that violate the confidentiality of unpublished manuscripts or peer review.
3. Guidelines for Authors
- Authorship: Generative AI or AI-assisted tools must not be listed as an author or co-author. Authorship implies intellectual responsibility and accountability that AI cannot fulfill.
- Transparent Disclosure: If GenAI tools were used in any part of manuscript preparation (e.g., for idea generation, literature synthesis, language enhancement, data analysis, or image creation), this must be disclosed in a dedicated section of the manuscript.
- Declaration Format: Include a section titled “Declaration of Generative AI Use” in the manuscript, placed before the References. This declaration must state:
  - The name and version of the AI tool used.
  - The specific purpose(s) for which it was used.
  - That the author(s) have reviewed and take full responsibility for all content produced or modified by the AI tool.
- Example Statement: “During the preparation of this work, the author(s) used [Tool Name, Version] in order to [specify purpose, e.g., improve language clarity and grammar]. After using this tool/service, the author(s) reviewed and edited the content as needed and take(s) full responsibility for the content of the publication.”
- Verification: Authors are solely responsible for verifying the accuracy, originality, and correct attribution of any information, citations, or text generated by AI. Presenting AI-generated content as human-original work is academic misconduct.
4. Guidelines for Reviewers and Editors
- Confidentiality Prohibition: The use of GenAI tools to analyze, summarize, evaluate, or make decisions on a confidential manuscript is strictly prohibited. Uploading any part of a submitted manuscript, its data, or reviewer comments to a public AI platform constitutes a severe breach of peer review confidentiality and ethics.
- Review Preparation: Reviewers must not use GenAI tools to draft their review reports. The critical assessment must be their own original intellectual work.
- Editorial Vigilance: Editors are responsible for ensuring compliance with this policy during the manuscript handling process.
5. Policy for AI-Created Images, Figures, and Data
Any visual elements, graphics, or datasets created or significantly altered by AI must be clearly identified as such in the figure caption or the "Methods" section.
6. Related Resources & Industry Standards
This policy is informed by the guidelines and recommendations of major publishing bodies, including:
- Elsevier’s Policy on the Use of Generative AI and AI-assisted Technologies: https://www.elsevier.com/about/policies-and-standards/the-use-of-generative-ai-and-ai-assisted-technologies-in-writing-for-elsevier
- Committee on Publication Ethics (COPE) guidance
- The STM Association’s recommendations on AI use
7. Non-Compliance
Failure to disclose the use of GenAI, or misuse that breaches confidentiality, will result in rejection of the manuscript. If such misconduct is discovered post-publication, it may lead to a correction or retraction in accordance with COPE guidelines.
This policy will be reviewed periodically to reflect technological advancements and evolving scholarly norms.