Assessing Research Integrity in the Age of AI: A Longitudinal Analysis Using an AI Misuse Impact Index

Authors

DOI:

https://doi.org/10.38140/obp4-2026-12

Keywords:

Academic integrity, AI-mediated supervision, AI policy, artificial intelligence, ethical considerations, impact index, plagiarism tolerance, responsible AI use

Abstract

The increasing adoption of artificial intelligence (AI) in academic research has reshaped scholarly practices while introducing complex ethical risks, particularly concerning research integrity and academic misconduct. This study proposes a quantitative, empirical framework, adapted from the Cobb-Douglas production function, to model how the misuse of AI contributes to systemic quality degradation, using retractions as a proxy for integrity breaches. Leveraging longitudinal publication and retraction data from Retraction Watch and Scopus, we construct an AI misuse impact index to track the relationship between research output and integrity risks over time. Time-series lag analysis reveals that retraction rates correlate most strongly with prior publication volumes at a one-year lag, indicating the rapid manifestation of AI-driven misconduct. To identify critical intervention points, we apply piecewise linear modelling to detect thresholds at which retraction rates accelerate disproportionately relative to publication growth. A plagiarism tolerance threshold is established, beyond which research quality deteriorates unsustainably. Additionally, we introduce a probabilistic damage model that quantifies the risk of systemic integrity failure as AI adoption expands. Results highlight a pronounced post-2009 rise in AI-related integrity risks, with a sharp inflection in 2023 when misconduct indicators exceeded acceptable tolerance levels, signalling a system-wide ethical crisis. The study further proposes a dynamic, data-driven method for calibrating institutional plagiarism thresholds in alignment with evolving integrity risks and patterns of AI adoption. This model enables proactive monitoring and policy adjustment, linking integrity governance directly to empirical risk indicators. The findings underscore the urgent need for adaptive, transparent AI oversight frameworks within academia, ensuring that AI complements rather than undermines the ethical and intellectual foundations of research. Future research should extend this work by integrating discipline-specific AI use patterns and developing real-time academic integrity monitoring systems.
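Illustrative sketch. The abstract does not specify the functional form of the misuse impact index, the damage model, or the data values; the short Python sketch below is only a hypothetical reconstruction of the analysis pipeline described above (Cobb-Douglas-style index, lag correlation, piecewise breakpoint detection), using synthetic series and assumed parameter values, not the study's actual model or estimates.

# Illustrative sketch only. Functional forms, parameter values, and the
# synthetic publication/retraction series are assumptions for demonstration;
# they are not taken from the paper.
import numpy as np

rng = np.random.default_rng(0)
years = np.arange(2000, 2026)

# Synthetic annual publication volume P_t and retraction counts R_t
# (placeholders for the Scopus / Retraction Watch series used in the study).
P = 1_000_000 * np.exp(0.05 * (years - 2000)) * (1 + 0.02 * rng.standard_normal(len(years)))
R = 0.0004 * P * (1 + 0.15 * np.clip(years - 2009, 0, None)) * (1 + 0.1 * rng.standard_normal(len(years)))

# Cobb-Douglas-style misuse impact index (assumed form): I_t = A * P_t^alpha * R_t^beta
A, alpha, beta = 1.0, 0.4, 0.6   # illustrative elasticities, not the paper's estimates
I = A * P**alpha * R**beta
print("illustrative index, final year:", round(float(I[-1]), 1))

# Lag analysis: correlate this year's retraction rate with publication
# volume k years earlier, for k = 0..3.
rate = R / P
for k in range(4):
    r = np.corrcoef(P[: len(P) - k], rate[k:])[0, 1]
    print(f"lag {k}: corr(P_(t-{k}), R_t/P_t) = {r:.3f}")

# Piecewise linear fit: search for the single breakpoint (over log publication
# volume) that minimises the total squared error of two linear segments.
x, y = np.log(P), rate
best = None
for b in range(3, len(x) - 3):
    sse = 0.0
    for seg in (slice(0, b), slice(b, len(x))):
        coef = np.polyfit(x[seg], y[seg], 1)
        sse += float(np.sum((np.polyval(coef, x[seg]) - y[seg]) ** 2))
    if best is None or sse < best[0]:
        best = (sse, years[b])
print("estimated breakpoint year:", best[1])

# Probabilistic damage model: an illustrative logistic mapping from the index
# to a systemic-failure probability; the paper's actual model is not given here.
tau, s = np.median(I), np.std(I)
p_fail = 1.0 / (1.0 + np.exp(-(I - tau) / s))
print("illustrative failure probability, final year:", round(float(p_fail[-1]), 3))

In this sketch the breakpoint search plays the role of the intervention threshold and the logistic mapping stands in for the probabilistic damage model; the real study presumably calibrates both against the observed Retraction Watch and Scopus series rather than synthetic data.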

Published

2026-03-10