Ethics & limits

Epistemic, social, and environmental considerations

Note: Learning objectives
  • Name and reason about the major ethical tensions in AI use for science.
  • Identify the epistemic risks of outsourcing judgement to a model.
  • Make grounded data-handling decisions using your institution’s policies.

Epistemic risks

  • Homogenisation: everyone using the same model converges on the same framings, blind spots, and errors. The default LLM answer becomes the default scientific framing.
  • Deskilling: offloading judgement tasks can erode the skills needed to catch the model when it’s wrong. The risk is highest for trainees who use AI before developing baseline competence.
  • Calibration failure: LLMs present confident, fluent answers regardless of underlying certainty. Used uncritically, this corrodes the habits of calibrated scientific writing.
  • Epistemic injustice: Kay, Kasirzadeh, and Mohamed (2024) argue that generative AI can amplify testimonial injustice (whose voices are heard) and hermeneutical injustice (whose conceptual frameworks are usable), particularly outside dominant languages and traditions.

Authorship and attribution

As of early 2026, the major publishers and bodies have converged on three positions, with meaningful variation in implementation detail:

  1. AI cannot be listed as an author. ICMJE, Nature, Science, and most major publishers are unambiguous on this point. Authorship requires the ability to be accountable for the work: to respond to post-publication queries, accept responsibility for errors, and make corrections. AI systems cannot do this.

  2. AI use must be disclosed. The threshold for “material use” varies by journal, but the convergent norm is: if AI assisted in writing, data analysis, image generation, or code, that use must be described in the methods or acknowledgments section. Most journals require specifying which tool, for what purpose, and at what stage; a typical statement names the tool and version, the task it performed, and confirms that the authors reviewed the output and take responsibility for it.

  3. Verbatim AI-generated text must be marked. Using AI output as a draft and then editing it falls under disclosure. Passing AI output verbatim as your own prose may constitute a policy violation or, in assessment contexts, academic misconduct. The distinction between “AI-assisted” and “AI-generated” is increasingly policed by institutional and journal policies.

A practical taxonomy that maps onto these three positions:

  • Assistive (AI supports human work): the researcher directs, judges, and writes; AI assists with grammar, formatting, code scaffolding, or literature triage. Disclosure required; the work is unambiguously the researcher’s.
  • Generative (AI produces substantial content): AI drafts prose, generates figures, or produces analytical code that is used with editing. The researcher is accountable for the final output. Full disclosure of tool, purpose, and stage required.
  • Prohibitive (not permitted): AI listed as author; AI-generated text passed off verbatim without disclosure; AI used in peer review without journal permission. These are policy violations regardless of how useful the AI was.

The practical question at every use point is: which tier am I in? Assistive use is broadly permitted under disclosure norms. Generative use is permitted at most journals under explicit disclosure. Prohibitive use is not permitted.
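One way to make the tier question concrete, say in a lab handbook, is a small checklist function. The following is a minimal sketch: the flag names and the ordering of checks are illustrative assumptions, not any journal’s actual policy text.

```python
# Minimal sketch of the three-tier decision as a checklist.
# The flags and their ordering are illustrative assumptions,
# not any journal's actual policy.

def usage_tier(
    listed_as_author: bool,
    verbatim_without_disclosure: bool,
    peer_review_without_permission: bool,
    drafted_substantial_content: bool,
) -> str:
    """Classify an AI use as Prohibitive, Generative, or Assistive."""
    # Prohibitive uses are checked first: they are policy
    # violations regardless of how useful the AI was.
    if (listed_as_author
            or verbatim_without_disclosure
            or peer_review_without_permission):
        return "Prohibitive"
    # Substantial drafted content (prose, figures, analysis code)
    # is Generative: permitted, with full disclosure of tool,
    # purpose, and stage.
    if drafted_substantial_content:
        return "Generative"
    # Everything else (grammar, formatting, scaffolding, triage)
    # is Assistive: disclose, and the work remains the researcher's.
    return "Assistive"

print(usage_tier(False, False, False, True))  # Generative
```

The ordering encodes the one rule that generalises across policies: prohibitive uses trump everything else, so they are checked before any question of how much the AI contributed.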

Why the Generative tier requires disclosure yet remains permitted: the distinction the ICMJE and most journals draw is between assisted inquiry and misrepresentation. Using AI to assist inquiry (to move faster, to draft, to search) is a methods choice, and methods must be disclosed. Passing AI output off as your own intellectual product without disclosure misrepresents the origin of the work. The policy is not about restricting tools; it is about maintaining accurate attribution of intellectual contribution.

Warning: These policies change. Verify before teaching.

ICMJE, Nature, and Science update their policies on an ongoing basis. Always link to the current online version of each policy, not a cached date-stamped snapshot. The summaries above reflect the convergent position as of early 2026.

The distinctions worth teaching:

  • Authorship requires accountability for the work. AI cannot be accountable. AI is never an author.
  • Acknowledgment and disclosure are required when AI was used materially. The norms for “materially” are still forming. Defaults are converging on “any non-trivial use”.
  • Citation is for ideas and methods, not for the AI tool. Cite the underlying paper the AI surfaced. Cite the method the AI implemented.

Environmental and labour costs

Honestly stated:

  • Pretraining a frontier model uses substantial energy. Per-query inference is small. Aggregate inference is not (see the back-of-envelope sketch after this list).
  • Annotation, preference labelling, and content moderation depend on human labour, often outsourced under difficult conditions.
  • Neither cost invalidates the technology. Both should shape how often and how thoughtlessly you reach for it.
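To see the per-query versus aggregate asymmetry, a back-of-envelope sketch helps. Both constants below are loudly placeholder assumptions, not measurements; substitute current published estimates for the model you actually use.

```python
# Back-of-envelope: per-query vs aggregate inference energy.
# Both constants are illustrative placeholders, not measurements.
WH_PER_QUERY = 0.3       # assumed energy per chat query, watt-hours
QUERIES_PER_DAY = 1e9    # assumed global daily query volume

daily_kwh = WH_PER_QUERY * QUERIES_PER_DAY / 1_000
yearly_gwh = daily_kwh * 365 / 1_000_000

print(f"per query: {WH_PER_QUERY} Wh (trivial on its own)")
print(f"aggregate: {daily_kwh:,.0f} kWh/day, ~{yearly_gwh:.0f} GWh/year")
```

With these placeholders, one query is trivial, but the daily aggregate is roughly the electricity use of ten thousand households (at an assumed ~30 kWh per household per day): small per unit, large in sum.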

Limits worth naming

  • LLMs are not reliable sources of truth about the world.
  • They are not reliable arbiters of quality, novelty, or significance.
  • They do not “understand” causality, statistics, or biology in any robust sense, though they can produce outputs that look like they do, especially in the dominant subfields of their training distribution.
  • They are not your peer reviewer. They are not your IRB. They are not your statistical consultant.

Exercises

  1. A collaborator wants to list ChatGPT as a co-author on a manuscript because it “drafted half the introduction”. What is the convergent publisher position, and what is the underlying reason?
  2. Your institution has a “zero-retention” API tier with the model vendor. Can you safely paste de-identified clinical data into it? What questions would you ask first?
  3. Name one epistemic risk and one labour or environmental cost of frontier-model use, and explain why the two require different responses.

Answers:

  1. AI cannot be listed as an author. Authorship requires accountability: the ability to respond to post-publication queries, accept responsibility for errors, and issue corrections. An AI system cannot be accountable. Disclosure of the AI’s contribution in the methods or acknowledgments is the right path, not authorship.
  2. Probably not without further checks. “Zero-retention” tiers still typically retain data briefly for abuse monitoring and are not equivalent to on-prem deployment. De-identified clinical data may still be subject to HIPAA, IRB protocols, BAAs, and the original donor consent. Ask: does my institution have a signed BAA with this vendor? Does the IRB protocol cover transmission to a third-party model? What did the donor consent permit?
  3. Epistemic: homogenisation. Everyone using the same model converges on the same framings and blind spots, eroding scientific diversity. Response: name AI use, vary tools, keep your own framing. Labour and environmental: annotation work performed under difficult outsourced conditions, and aggregate inference energy that is not trivial. Response: use AI deliberately, not thoughtlessly, and favour vendors with stronger labour and disclosure practices.

Further reading