AI Policy and Ethical Framework

Responsible AI principles

Our vision

At Extrica, we believe technology should strengthen - not replace - human judgment in scholarly publishing. Artificial intelligence (AI) can accelerate editorial workflows, improve precision, and support quality assurance, but its role must always remain transparent, accountable, and ethical.

This policy defines how we integrate AI responsibly across our publishing processes. It outlines the values, commitments, and safeguards that guide every AI-assisted decision - ensuring trust among authors, reviewers, editors, and readers.

AI supports us in streamlining and strengthening the publication process - analyzing manuscripts, verifying citations and references, detecting plagiarism, and optimizing production workflows. Each AI-assisted operation is subject to mandatory human validation before any data is stored or published, maintaining the highest standards of editorial integrity.

Extrica operates all AI-assisted workflows within secure, access-controlled environments. Data exchanged between systems is encrypted in transit and at rest and processed exclusively within infrastructures that comply with recognized information security and data protection standards. These safeguards ensure that sensitive scholarly content remains protected at every stage of the publishing lifecycle.

This policy is applied in conjunction with Extrica’s publication ethics framework and is informed by guidance issued by the Committee on Publication Ethics (COPE) and other recognized best-practice bodies.

Definitions

For the purposes of this policy, artificial intelligence (AI) refers to computational systems that perform tasks typically requiring human intelligence, including the analysis, generation, classification, or transformation of text, images, data, or other content. Generative AI refers to AI systems capable of producing new content based on prompts or input data, such as text, images, code, or visualisations. References to AI tools in this policy include both generative and non-generative AI systems used to support publishing workflows.

Our guiding principles

Human-Centered Oversight. Every AI process in our ecosystem is subject to human judgment. Editors, reviewers, authors, and production specialists remain responsible for validating AI-generated results and maintaining editorial integrity.

Transparency and Explainability. Authors and editors deserve to know when and how AI is used. We clearly communicate what each AI system does, what data it processes, and how human oversight ensures accuracy.

Privacy and Data Stewardship. We protect personal and confidential data through strict privacy safeguards. AI tools used by Extrica operate under data protection agreements.

Accountability and Continuous Updates. We monitor our AI systems and update our practices in line with evolving ethical, legal, and technological standards.

AI in the publication process

Below we outline how Extrica currently uses AI tools throughout the publishing workflow.

Article submission and metadata extraction

Upon submission, Extrica uses AI-powered document processing tools (such as Azure AI Document Intelligence) to parse manuscripts. These tools automatically extract key details such as the title, authors’ names and affiliations, emails, abstract, keywords, and reference list from the uploaded PDF or Word files.

All automatically extracted metadata is subject to human validation: the authors themselves confirm the completeness and accuracy of the metadata before the proofing stage, indicating whether any elements are missing from the submitted article, such as author emails, ethical statements, or funding information.
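The validation gate described above can be illustrated with a brief sketch. The function and field names below (extract_metadata, ManuscriptMetadata, persist_metadata) are hypothetical placeholders rather than Extrica’s actual interfaces; the sketch only shows the principle that machine-extracted metadata is never stored until a human has confirmed it.

```python
from dataclasses import dataclass, field

@dataclass
class ManuscriptMetadata:
    # Hypothetical container for the fields the extraction step returns.
    title: str = ""
    authors: list[str] = field(default_factory=list)
    emails: list[str] = field(default_factory=list)
    abstract: str = ""
    keywords: list[str] = field(default_factory=list)
    references: list[str] = field(default_factory=list)

def extract_metadata(manuscript_path: str) -> ManuscriptMetadata:
    """Placeholder for the AI-powered parsing step (e.g. a document
    intelligence service). Returns unvalidated, machine-extracted fields."""
    raise NotImplementedError

def store_metadata(meta: ManuscriptMetadata, confirmed_by_author: bool) -> None:
    """Persist metadata only after the author has reviewed it and flagged
    anything missing (emails, ethical statements, funding information)."""
    if not confirmed_by_author:
        raise ValueError("Extracted metadata must be validated before storage.")
    # persist_metadata(meta)  # hypothetical storage call, gated on validation
```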

Research integrity checks

To safeguard research integrity, Extrica deploys plagiarism detection and research integrity screening tools across all submissions. We use industry-recognised services such as Crossref Similarity Check (powered by iThenticate) to identify textual overlap with published literature.

In addition, Extrica uses specialised research integrity tools, including Clear Skies, to support editors in identifying potential integrity risks that may warrant further scrutiny. These tools apply AI-assisted and pattern-based analyses to detect indicators associated with paper mills, duplicate or recycled submissions, manipulated or unnatural scientific language, unusual authorship or submission patterns, and citation risks linked to previously flagged or retracted content.

The outputs of these tools do not replace editorial judgment or peer review but provide risk-based signals that help editors determine where additional checks or clarification may be appropriate.
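As a purely illustrative example of how such risk-based signals can be combined into decision support (this does not describe any specific vendor’s method or thresholds), the sketch below aggregates hypothetical indicator flags into an advisory label for the handling editor.

```python
from dataclasses import dataclass

@dataclass
class IntegritySignals:
    # Hypothetical indicator flags of the kind integrity-screening tools report.
    similarity_score: float          # textual overlap with published work, 0-100
    cites_retracted_work: bool
    paper_mill_indicators: int       # count of pattern-based matches
    unusual_authorship_pattern: bool

def advisory_risk_level(signals: IntegritySignals) -> str:
    """Combine indicators into an advisory label. Thresholds are illustrative;
    the output informs, but never replaces, editorial judgment and peer review."""
    score = 0
    score += 2 if signals.similarity_score > 40 else (1 if signals.similarity_score > 20 else 0)
    score += 2 if signals.cites_retracted_work else 0
    score += min(signals.paper_mill_indicators, 3)
    score += 1 if signals.unusual_authorship_pattern else 0
    if score >= 5:
        return "high: recommend additional editorial checks"
    if score >= 2:
        return "moderate: request clarification where appropriate"
    return "low: proceed with standard handling"
```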

Scope match and relevance screening

Extrica has developed an internal AI module for article relevancy scoring. This tool evaluates how well a submitted manuscript’s content aligns with the aims and scope of the target journal. Using natural language processing, it examines the title, abstract, keywords, and, where needed, full-text sections to gauge subject-matter fit. The AI generates a relevancy score or flag for each submission, helping to identify out-of-scope manuscripts early.

This is a decision-support tool to assist Editors-in-Chief and editorial board members in the desk review stage.

Crucially, a human editor always reviews the AI’s suggestion: if the tool flags a submission as likely out-of-scope, an Assistant Editor or Editor-in-Chief will manually verify the content against the journal’s scope before making any rejection or transfer decision. This ensures that borderline cases get a fair human assessment.
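The internal module is not specified in technical detail in this policy; as a rough sketch of the kind of scoring involved (assuming a simple TF-IDF similarity measure rather than Extrica’s actual model), the example below scores a submission against a journal’s stated aims and scope and flags low-scoring manuscripts for human desk review.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def relevance_score(journal_scope: str, submission_text: str) -> float:
    """Score scope fit in [0, 1] using TF-IDF cosine similarity (illustrative only)."""
    vectorizer = TfidfVectorizer(stop_words="english")
    matrix = vectorizer.fit_transform([journal_scope, submission_text])
    return float(cosine_similarity(matrix[0], matrix[1])[0, 0])

SCOPE_THRESHOLD = 0.15  # illustrative cut-off; any real threshold would be tuned per journal

def flag_for_desk_review(journal_scope: str, submission_text: str) -> bool:
    """A True flag only prompts manual verification by an editor; it never
    triggers an automatic rejection or transfer decision."""
    return relevance_score(journal_scope, submission_text) < SCOPE_THRESHOLD
```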

Reviewer identification

We use Prophy, an AI-based editorial support tool, to help identify relevant reviewers and match reviewer expertise to the scope of submissions. Prophy recommends suitable peer reviewers based on manuscript content and reviewer profiles.

Post-acceptance production

We use Azure OpenAI services to assist with copyediting, formatting, and proofreading. These services are used to automatically identify and tag entities (such as author names, institutions, and references), process text to match our publication style, check terminology for consistency, and parse and format equations, tables, and figure captions, adapting the manuscript to the layout requirements of our journals. All such AI usage is closely overseen by our production team.
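A minimal sketch of how such an entity-tagging call might look, assuming the openai Python package’s AzureOpenAI client; the endpoint, deployment name, and prompt are illustrative placeholders, and the tagged output is treated as a draft for the production team to review rather than a final edit.

```python
from openai import AzureOpenAI

# Illustrative configuration; the endpoint, key, and deployment name are placeholders.
client = AzureOpenAI(
    azure_endpoint="https://example-resource.openai.azure.com",
    api_key="<api-key>",
    api_version="2024-06-01",
)

def tag_entities(paragraph: str) -> str:
    """Ask the model to mark author names, institutions, and references in a
    paragraph. The result is a draft for human review, not a final edit."""
    response = client.chat.completions.create(
        model="copyediting-deployment",  # hypothetical deployment name
        messages=[
            {"role": "system",
             "content": "Tag author names, institutions, and references in the "
                        "text using <author>, <institution>, and <ref> markers. "
                        "Do not change the wording."},
            {"role": "user", "content": paragraph},
        ],
    )
    return response.choices[0].message.content
```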

Data protection

Throughout all these AI applications in our workflow, Extrica is committed to protecting personal data and confidential content. We select AI tools and platforms that provide adequate personal data protection measures.

Extrica enters into the necessary data protection agreements with third-party AI tool providers, ensuring that Extrica retains control over the data and that it is used solely to provide the intended service to us, not to train broad AI models. For more information on how we process personal data, please see our Privacy Policy.

For authors

Extrica permits the use of AI tools to support manuscript preparation, including language refinement, proofreading, formatting, and improvements to clarity or structure. These assistive uses do not require disclosure.

Authors remain fully responsible for the content of their submissions at all times and must ensure accuracy, originality, and compliance with Extrica’s standards of scholarly quality and research integrity.

Disclosure of AI use

If authors use AI tools as part of the research process (e.g., as a component of the study design, analysis, or generation of research outputs) or in a way that materially influences the writing or presentation of the manuscript, this must be disclosed in the cover letter at the submission stage and in the acknowledgments section of the manuscript. This includes specifying the AI tool used and describing how it was used.

Responsibility

Authors remain fully responsible for all content generated with the assistance of AI tools and for ensuring compliance with Extrica’s standards of scholarly integrity, ethical publishing, and data protection. As generative AI systems may produce inaccurate, incomplete, or fabricated content - including incorrect factual statements or non-existent citations - authors must independently verify the accuracy, originality, and completeness of all AI-assisted outputs and confirm all referenced sources against the original scholarly literature.

Protection of research data and results

AI tools must not be used to generate, modify, enhance, or manipulate original research data or findings. This includes, but is not limited to, images, figures, blots, photographs, radiological outputs, measurements, or any other primary research materials.

Authorship

Generative AI tools may not be credited as authors or co-authors of a published work. Authorship is limited to individuals who meet established criteria for intellectual contribution and accountability.

AI tools cannot meet the requirements for authorship, as they cannot take responsibility for the submitted work. Because they are not legal entities, they can neither assert the presence or absence of conflicts of interest nor manage copyright and license agreements.

Copyright, confidentiality, and data protection obligations

When using AI tools in any permitted capacity, authors must carefully assess copyright, privacy, and confidentiality considerations before uploading text, data, or other materials to AI platforms. Authors are responsible for ensuring that they hold the necessary rights to all materials submitted to such tools, including any third-party copyrighted content.

For reviewers

Use of AI in peer review

Peer review is a scholarly responsibility that must reflect the reviewer’s own expertise and independent judgment. The use of generative AI tools to draft, generate, or substitute peer review reports, in whole or in part, is not permitted.

Manuscripts under review, including any supplementary materials, must not be uploaded to publicly available generative AI platforms. Doing so may breach confidentiality obligations and create risks related to copyright, privacy, security, and the protection of confidential information. For the same reasons, reviewers must not use publicly available AI tools to edit, refine, or otherwise process review reports or manuscript content.

Reviews found to rely inappropriately on generative AI tools will be excluded from the editorial evaluation process. In such cases, the reviewer’s eligibility for future review assignments may be reconsidered.

Suspected inappropriate use of AI

Where a reviewer has reason to believe that a submitted manuscript involves inappropriate or undisclosed use of generative AI, the concern should be raised with the handling editor as part of the review process.

If an editor identifies or suspects improper use of generative AI in a manuscript or a peer review report, the matter will be assessed in accordance with this policy and, where appropriate, escalated for further guidance.

Concerns relating to the inappropriate or undisclosed use of generative AI in published content will be examined through a coordinated review process led by the Editor-in-Chief and Extrica, in line with applicable COPE guidance and Extrica’s policies.

For editors

Editorial teams may use AI-enabled tools to assist with research integrity and quality control activities, such as identifying potential copyright violations, unauthorised content reuse, excessive paraphrasing, or plagiarism.

However, general-purpose, publicly accessible generative AI platforms must not be used for these purposes, as submitting manuscripts or review materials to such systems may expose confidential content and create risks related to privacy, copyright, and intellectual property protection.

Only AI tools that operate within secure environments and provide appropriate safeguards for data protection, confidentiality, and copyright compliance may be used in editorial workflows. Prior to deployment, these tools must be subject to appropriate due diligence, and their terms of use must align with applicable legal, ethical, and publishing standards.

AI-assisted tools are intended to support, not replace, editorial judgment. Editors must independently review and assess all AI-generated findings before making any editorial decisions.

Policy violations and consequences

Non-compliance with this policy may result in editorial or publisher action, depending on the nature and severity of the breach. Such actions may include, but are not limited to, rejection of a submission, correction or retraction of published content, exclusion of a peer review report from editorial consideration, removal from reviewer or editorial roles, or further investigation in accordance with Extrica’s policies and applicable COPE guidance.