Artificial Intelligence (AI) Policy
Introduction
The Editorial Board of the International Journal of Transparency and Accountability in Governance (IJTAG) acknowledges the rapid development of Artificial Intelligence (AI) and Large Language Models (LLMs) in academic contexts. While these tools may assist in certain aspects of language processing, IJTAG maintains a strict stance on the integrity of scholarly work. This policy establishes a zero-tolerance approach to the use of AI for content generation and affirms the journal’s commitment to academic authenticity, originality, and ethical authorship.
Scope and Purpose
This policy applies to all contributors to IJTAG—authors, peer reviewers, and editors. Its purpose is to uphold transparency, accountability, and scholarly standards in the journal’s publication ecosystem by clearly delineating the acceptable and unacceptable uses of AI.
1. Use of AI by Authors
1.1 Assistive Tools (Permitted)
Authors may use basic assistive tools solely for:
- Grammar correction
- Spelling checks
- Language refinement (e.g., Grammarly, Microsoft Editor, Quillbot in grammar mode)
Disclosure is not required for these uses. However, the final manuscript must reflect the author’s own scholarly input, originality, and critical reasoning.
1.2 Generative AI Tools (Strictly Prohibited)
The use of generative AI tools (e.g., ChatGPT, Gemini, Claude, GitHub Copilot) is strictly prohibited for:
- Drafting or generating any part of the manuscript
- Producing summaries, data analysis, or interpretation
- Creating citations, references, or visual materials
Authors must not:
- Use generative AI to produce any part of the content submitted
- Submit AI-generated or AI-influenced work, even if edited afterward
- Attribute authorship to AI tools in any capacity
Violations of this policy will result in immediate rejection of the manuscript and may lead to permanent blacklisting.
2. Use of AI by Reviewers
Peer reviewers are expected to uphold academic confidentiality and ethical review practices.
- Minor use of grammar or clarity tools is permitted.
- Inputting manuscript content into AI platforms that retain or learn from user data is strictly forbidden.
- Generative AI may not be used to draft any part of the review report.
Breach of these standards will result in removal from the reviewer database and possible formal action.
3. Use of AI by Editors
Editors may employ AI tools for administrative purposes such as:
- Reviewer identification
- Formatting assistance
However, editors must not use AI to generate decision letters, review summaries, or comments on confidential submissions. Entering confidential content into AI platforms that collect or retain data is strictly prohibited.
4. Similarity Limit and Plagiarism Detection
To maintain academic integrity:
- All manuscripts are subjected to Turnitin plagiarism checks
- A 20% maximum similarity index (excluding references and standard legal texts) is strictly enforced
- Manuscripts exceeding this limit or containing plagiarized or AI-generated content will be immediately rejected
5. Investigation and Sanctions
In suspected cases of AI misuse:
- A confidential investigation will be conducted by the Editorial Board
- The author(s) will be given an opportunity to be heard
- Depending on the response, confirmed violations may result in rejection, retraction, or permanent blacklisting
- The Board reserves the right to inform affiliated institutions in serious cases
6. Policy Status and Updates
This policy is provisional and subject to periodic review in line with global academic norms, advances in AI technology, and ethical publication standards.