Artificial Intelligence (AI) Policy
Brainwave: A Multidisciplinary Journal
1. Purpose
Brainwave: A Multidisciplinary Journal is committed to maintaining the highest standards of academic integrity, transparency, and ethical scholarly communication. As Artificial Intelligence (AI), including generative AI systems, develops rapidly and sees increasing adoption, the journal recognizes both the potential benefits and the risks associated with these technologies.
AI tools can assist researchers in improving efficiency, language quality, data analysis, and visualization. However, improper or undisclosed use of such technologies may compromise research integrity, authorship accountability, and the reliability of scholarly publications.
This policy establishes clear guidelines governing the ethical and responsible use of AI tools in all stages of the publication process involving Brainwave: A Multidisciplinary Journal.
2. Scope
This policy applies to all participants involved in the publication process, including:
- Authors submitting manuscripts
- Peer reviewers evaluating manuscripts
- Editors and editorial board members
- Editorial office staff and journal administrators
The policy governs the use of AI technologies in:
- Manuscript preparation
- Data analysis and visualization
- Peer review processes
- Editorial screening and decision-making
- Post-publication communications
3. Definition of AI Tools
For the purposes of this policy, AI tools refer to computational systems capable of generating, analyzing, modifying, or assisting with content creation using machine learning, natural language processing, or deep learning technologies.
Examples include:
- Generative language models (e.g., ChatGPT, Gemini, Claude)
- AI-assisted writing and grammar tools
- Automated translation software
- Image generation or enhancement systems
- AI-based data analytics or visualization platforms
- Automated code generation systems
These tools may assist research processes but cannot replace human scholarly responsibility.
4. Core Principles
All uses of AI technologies within the publication process must adhere to the following core principles:
Transparency
Any use of AI tools must be clearly disclosed.
Accountability
Human authors remain fully responsible for all content, interpretations, and conclusions.
Integrity
AI tools must not be used to fabricate, falsify, or manipulate research data or scholarly arguments.
Ethical Compliance
AI use must comply with internationally recognized research ethics guidelines.
5. Guidelines for Authors
5.1 Acceptable Uses
Authors may use AI tools for limited purposes that support the preparation and presentation of scholarly work, including:
- Language editing and grammar correction
- Improving readability or formatting
- Code assistance or debugging
- Data visualization
- Translation of text
- Generating summaries of publicly available literature
- Image enhancement that does not alter scientific meaning
All outputs generated using AI must be carefully reviewed and verified by the authors.
5.2 Disclosure of AI Use
Authors must clearly disclose the use of AI tools in a dedicated section of the manuscript, stating clearly where and how AI was used.
Example statement:
“The authors used [AI Tool Name] to assist with language editing and formatting of the manuscript.”
If no AI tools were used:
“No generative AI tools were used in the preparation of this manuscript.”
5.3 AI and Authorship
AI tools cannot be listed as authors or co-authors because they:
- Cannot assume responsibility for research
- Cannot approve final manuscripts
Only human contributors who meet authorship standards may be credited as authors.
6. Figures, Images, and Data Integrity
Authors must ensure that all figures, illustrations, and images accurately represent the underlying data.
Permitted Uses
AI tools may be used for:
- Image sharpening
- Noise reduction
- Resolution enhancement
- Data visualization
Prohibited Uses
- AI-generated images representing experimental results that did not occur
- Manipulating scientific images to alter interpretation
- Creating synthetic data without disclosure
- Misrepresenting AI-generated visuals as real experimental outcomes
7. AI in Code and Data Analysis
AI-assisted tools for coding or statistical analysis may be used if:
- Methods are clearly described in the manuscript
- Data processing procedures are reproducible
- AI outputs are independently verified by researchers
Authors remain responsible for ensuring accuracy and reproducibility.
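As a concrete illustration of the requirements above, the sketch below (in Python, with a hypothetical tool name and placeholder values) shows one way authors might make an AI-assisted analysis reproducible: fixing the random seed and recording the assisting AI tool, its role, and the run parameters alongside the result so that reviewers can independently verify the output.

```python
# Minimal sketch of reproducibility practices for AI-assisted analysis.
# The tool name and version string below are illustrative placeholders,
# not endorsements of any specific product.
import json
import random


def run_analysis(data, seed=42, n_bootstrap=1000):
    """Toy analysis: bootstrap estimate of the mean of `data`.

    A fixed seed makes the run deterministic, so the reported
    estimate can be reproduced exactly by reviewers.
    """
    rng = random.Random(seed)  # fixed seed -> reproducible resampling
    samples = [
        sum(rng.choices(data, k=len(data))) / len(data)
        for _ in range(n_bootstrap)
    ]
    return sum(samples) / len(samples)


# Record provenance alongside the result: which AI tool assisted,
# what it was used for, and the parameters needed to reproduce the run.
provenance = {
    "ai_tool": "ExampleAssistant v1.0",  # placeholder, disclosed per policy
    "ai_role": "code drafting; output verified by the authors",
    "seed": 42,
    "n_bootstrap": 1000,
}

result = run_analysis([1.0, 2.0, 3.0, 4.0], seed=provenance["seed"])
print(json.dumps({"estimate": round(result, 3), **provenance}, indent=2))
```

Recording the seed and tool details in a machine-readable form is one simple way to satisfy the reproducibility and independent-verification conditions listed above.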
8. Guidelines for Peer Reviewers
Peer reviewers are entrusted with maintaining the confidentiality and integrity of the review process.
8.1 Confidentiality
Reviewers must not upload or input manuscript content into AI systems that store, retain, or train on submitted data, including publicly available AI platforms.
8.2 Limited Acceptable Uses
Reviewers may use AI tools for limited purposes such as:
- Improving grammar in review reports
- Clarifying publicly available concepts
However, AI must not be used to generate peer review decisions or replace critical scholarly judgment.
8.3 Transparency
If AI tools were used to assist in preparing a review report, reviewers are encouraged to inform the editor.
9. Guidelines for Editors
Editors may use AI technologies to support editorial workflows but must maintain full human oversight.
Permitted Uses
Editors may use AI for:
- Initial manuscript screening
- Identifying potential plagiarism
- Language quality assessment
- Reviewer recommendation systems
- Editorial workflow management
Restricted Uses
AI must not be used to:
- Independently determine acceptance or rejection
- Replace editorial evaluation
- Generate editorial decisions without human review
Final decisions always remain the responsibility of the editor.
10. Confidentiality and Data Protection
Reviewers must not upload or share the manuscript with AI systems that store or train on submitted data. This protects:
- Author confidentiality
- Intellectual property
- Unpublished research findings
This requirement aligns with best practices recommended by the Committee on Publication Ethics (COPE) and major publishers.
11. AI Detection and Verification
The journal reserves the right to use AI detection and research integrity tools to evaluate manuscripts for:
- AI-generated text
- Fabricated references
- Manipulated images
- Data anomalies
If concerns arise, authors may be asked to provide:
- Original datasets
- Draft versions of manuscripts
- Detailed methodological explanations
12. Ethical and Legal Compliance
All participants must comply with international ethical standards and guidelines, including those issued by:
- Committee on Publication Ethics (COPE)
- International Committee of Medical Journal Editors (ICMJE)
- UNESCO (Recommendation on the Ethics of Artificial Intelligence)
AI tools must not be used in ways that introduce:
- Bias or discrimination
- Misinformation
- Privacy violations
- Intellectual property infringement
13. Non-Compliance and Sanctions
Failure to comply with this AI policy may lead to actions including:
- Manuscript rejection
- Retraction of published articles
- Correction notices
- Temporary or permanent bans on submissions
- Notification of authors’ institutions
- Reporting to research ethics bodies
Investigations will follow international publication ethics standards.
14. Policy Review
Given the rapidly evolving nature of AI technologies, this policy will be reviewed periodically and updated when necessary to reflect emerging ethical, technological, and publishing developments.
References
Committee on Publication Ethics. (2023). Guidance on artificial intelligence and authorship.
International Committee of Medical Journal Editors. (2023). Recommendations for the conduct, reporting, editing, and publication of scholarly work in medical journals.
UNESCO. (2021). Recommendation on the Ethics of Artificial Intelligence.
Elsevier. (2023). Artificial Intelligence policies for authors, reviewers, and editors.
Springer Nature. (2023). Guidelines on the use of AI tools in research publications.
Wiley. (2023). AI author policies for scholarly publishing.