Generative Artificial Intelligence (AI) tools, including large language models (LLMs) and multimodal systems, continue to develop rapidly and are increasingly used across research, publishing, and communication activities.
Lontar Physics Today (LPT) recognises the potential of Generative AI tools to support scholarly work, particularly in idea exploration, improving language clarity for authors writing in a second language, and facilitating research dissemination. This policy provides guidance for authors, editors, and reviewers and may be updated as AI technologies and ethical standards continue to evolve.
Generative AI tools may produce various forms of output, including text, images, audio, and synthetic data. Examples of such tools include ChatGPT, Copilot, Gemini, Claude, DALL-E, Midjourney, and similar systems. Despite their capabilities, current AI tools also present several limitations and risks.
Key risks associated with current Generative AI technologies include:
Inaccuracy and bias: AI-generated outputs are probabilistic rather than verified and may contain errors, fabricated information, or embedded bias.
Lack of attribution: AI tools often fail to provide appropriate scholarly attribution for ideas, quotations, or sources.
Confidentiality and intellectual property risks: Many AI tools operate on third-party platforms that may not adequately protect confidential data or intellectual property.
Unintended reuse: User inputs or outputs may be reused by AI providers (e.g., for model training), potentially affecting author and publisher rights.
Authors
Authors remain fully responsible for the originality, validity, and integrity of their submissions. When using Generative AI tools, authors are expected to exercise careful judgment, review all outputs critically, and ensure compliance with the journal’s authorship and publication ethics policies.
Responsible use of Generative AI tools may include, but is not limited to:
Idea generation and conceptual exploration
Language editing and clarity improvement
Interactive literature searching using AI-assisted tools
Literature classification and organisation
Coding or technical assistance
Authors must ensure that all submitted content meets accepted standards of scholarly rigour, validation, and accountability, and that the intellectual contribution remains that of the author. Authors are encouraged to consult the journal editor if uncertain whether a given use of AI goes beyond language-related support.
Generative AI tools must not be listed as authors, as authorship entails responsibility, accountability, and legal agreements that can only be assumed by human contributors.
Disclosure of AI Use
Any use of Generative AI tools must be transparently disclosed within the manuscript. The disclosure should state the name and version of the tool, how it was used, and the purpose of its use. For journal articles, this statement should be included in the Methods or Acknowledgements section.
The journal reserves the right to assess whether AI tools have been used appropriately and responsibly prior to publication.
Authors should ensure that any AI tool used provides adequate safeguards regarding confidentiality, data security, and intellectual property protection.
Manuscripts should not be submitted where Generative AI tools have effectively replaced core researcher or author responsibilities, for example by including:
Unreviewed AI-generated text or code
Synthetic data substituted for missing or uncollected data
Unverified or inaccurate AI-generated content, including abstracts or supplementary materials
Such cases may be subject to editorial review or investigation.
At present, LPT does not permit the use of Generative AI for the creation or manipulation of images, figures, or original research data for publication. This includes charts, tables, visual data representations, code-based figures, and formulas. Image manipulation refers to adding, removing, altering, or obscuring specific elements within a figure.
Use of Generative AI technologies should always involve human oversight and transparency. Ethical guidance on AI continues to develop, and LPT will revise this policy as necessary to reflect emerging standards and best practices.
Editors and Peer Reviewers
To safeguard confidentiality and intellectual property, editors and peer reviewers must not upload unpublished manuscripts, associated files, or review materials into Generative AI systems.
Editors
Editors are responsible for maintaining the confidentiality of submissions and the integrity of the peer-review process. Any use of AI tools by editors must be authorised and consistent with these responsibilities.
Peer Reviewers
Peer reviewers should not use Generative AI to analyse or summarise submitted manuscripts. AI tools may be used only to improve the language of review reports, with reviewers retaining full responsibility for the content and accuracy of their evaluations.