Turnitin Clarity in Higher Education: Promise, Pitfalls and Practical Questions
1. Why Turnitin Clarity is on the agenda
Across higher education, generative AI has unsettled long-held assumptions about written assessment. A first wave of institutional responses focused on AI detection scores embedded in tools such as Turnitin. Sector reviews now converge on the conclusion that current AI detectors are technically weak, easily evaded and prone to bias, particularly against non-native English writers and some neurodivergent students (Liang et al., 2023; Stanford University HAI, 2023; Perkins et al., 2024; Jisc, 2025; Northern Illinois University CITL, 2024). Independent testing shows modest baseline accuracy that collapses when simple paraphrasing or other evasion strategies are used, with real-world performance far below vendor marketing claims (Perkins et al., 2024; Elek et al., 2025; Weber-Wulff et al., 2023). In parallel, a growing number of universities in the UK, North America, Europe and Australia have disabled or formally discontinued AI detection scores, citing accuracy, equity, due process and regulatory concerns (Blommerde et al., 2024; KU Leuven, 2023; University of Nottingham, 2024; Vanderbilt University, 2023; University of Pittsburgh Teaching Center, 2025). Turnitin has responded by repositioning itself away from a single AI score and towards an integrated environment, Turnitin Clarity, which combines a browser-based writing space, an embedded AI assistant and a process-oriented Writing Report (Turnitin, 2025a; Turnitin, 2025b; Turnitin, 2025c). This blog introduces Clarity for a general higher education audience, summarises what it does, and offers an evidence-informed critique of its potential benefits and risks.
2. What is Turnitin Clarity?
Turnitin Clarity is an add-on to Feedback Studio that integrates three components in a single workflow: a browser-based Student Writing space, an AI Assistant, and a Writing Report that visualises the writing process (Turnitin, 2025a; Turnitin, 2025b). It can be understood as an example of an integrated writing analytics system that fuses writing process data, embedded AI assistance and AI writing detection into one vendor-controlled environment (Leijten and Van Waes, 2013; Ranalli et al., 2018).
First, the Student Writing space replaces or supplements traditional file-upload assignments. Students draft directly inside a Turnitin web editor rather than submitting a Word or PDF file. They can type, paste, format and save their work over multiple sessions. The platform records how the text develops over time, capturing detailed process data on keystrokes, revisions, pauses and paste events (Leijten and Van Waes, 2013; Turnitin, 2025a). Second, if enabled, students see an AI Assistant panel next to the editor. They can ask for help with structure, clarity, grammar or citations, using predefined prompts or their own questions. Turnitin markets this assistant as a guide to critical thinking and writing quality rather than a tool that simply produces finished answers (Turnitin, 2025a; Turnitin, 2025c). Third, after submission, Turnitin generates a Writing Report. This sits alongside the usual Similarity and AI Writing reports in Feedback Studio and summarises the writing process, including the balance between typed and pasted text, a timeline or playback of key drafting stages, and any AI Assistant conversations that took place in the writing space (Turnitin, 2025b). In principle, Clarity is not a single score but an ecosystem that combines writing support with process analytics.
Figure 1 below summarises the three main components of Turnitin Clarity and how they relate to one another.
FIGURE PLACEHOLDER 1 (for author use, remove before publication)
Figure 1. Overview of Turnitin Clarity components
Description: A clean, simple infographic with three panels arranged from left to right. The left panel shows the Student Writing space with an icon of a laptop and a text document, plus short labels such as typing, pasting and drafts over time. The middle panel shows the AI Assistant with a chat bubble icon and labels such as structure, clarity, language and citations. The right panel shows the Writing Report with an icon of a report that includes a timeline and a small bar chart, with labels such as process timeline, pasted text and AI indicators. Arrows flow left to right to show how student writing and AI support feed into the Writing Report.
Suggested AI image prompt:
Create a clean, modern infographic that explains an educational writing analytics platform called Clarity. Show three main blocks arranged left to right with arrows between them. Block 1: Student writing space with an icon of a laptop and a simplified text document, plus small labels like typing, pasting and drafts over time. Block 2: AI assistant with a chat bubble icon beside the laptop, and small labels like structure, clarity, language and citations. Block 3: Writing report with an icon of a report that includes a timeline and a small bar chart, with labels like process timeline, pasted text and AI indicators. Use a neutral higher education style, flat vector graphics, soft blues and greys, no brand logos or real company names, and minimal text that is clearly legible.
3. How does Clarity work in practice?
Implementation will differ by institution and virtual learning environment, but a typical Clarity workflow can be described from the perspectives of both instructors and students.
3.1 Setting up an assignment
Instructors create a Turnitin Student Writing assignment type rather than a standard file upload. They attach instructions and, where relevant, a rubric. The AI Assistant can draw on these materials when responding to student queries. For each assignment, instructors decide which tools to enable inside the writing space, for example AI chat, spelling and grammar checks, and citation checks, and whether students will be given access to their own Writing Reports (Turnitin, 2025a).
3.2 What students experience
Students access the brief and rubric, then write inside the Turnitin editor over one or more sessions. If AI tools are enabled, they can use the assistant, grammar and citation checks as they draft, and when they are ready, they submit directly from the same screen. Beneath the surface, Turnitin logs changes to the document, paste events, timing of work sessions and the content of AI queries and responses (Leijten and Van Waes, 2013; Ranalli et al., 2018; Turnitin, 2025b).
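To make the idea of process data concrete, the sketch below summarises a toy writing-event log. It is purely illustrative: the event schema, field names and the thirty-minute session gap are assumptions for this example, not Turnitin's actual data model, which is not publicly documented. It shows the kind of typed-versus-pasted and session-count summary a Writing Report might plausibly derive from such a log.

```python
from dataclasses import dataclass

# Hypothetical event schema: Turnitin's real log format is not public,
# so these field names and the summary logic are illustrative only.
@dataclass
class WritingEvent:
    timestamp: float   # seconds since the assignment was opened
    kind: str          # "type" or "paste"
    chars: int         # characters added by this event

def summarise(events, session_gap=1800):
    """Summarise a writing log: typed vs pasted characters, plus a
    rough session count based on gaps longer than session_gap seconds."""
    typed = sum(e.chars for e in events if e.kind == "type")
    pasted = sum(e.chars for e in events if e.kind == "paste")
    total = typed + pasted
    sessions = 1 if events else 0
    for prev, cur in zip(events, events[1:]):
        if cur.timestamp - prev.timestamp > session_gap:
            sessions += 1
    return {
        "typed_chars": typed,
        "pasted_chars": pasted,
        "pasted_share": pasted / total if total else 0.0,
        "sessions": sessions,
    }

log = [
    WritingEvent(0, "type", 400),
    WritingEvent(120, "paste", 600),
    WritingEvent(7200, "type", 1000),  # a second sitting, hours later
]
print(summarise(log))
# → {'typed_chars': 1400, 'pasted_chars': 600, 'pasted_share': 0.3, 'sessions': 2}
```

Even this toy example shows why such summaries demand careful interpretation: a high pasted share may reflect legitimate offline drafting rather than misuse, a point returned to below.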
3.3 What staff see after submission
Once the due date passes, instructors can open the submission in Feedback Studio and select the Writing view. The Writing Report presents a summary of writing time and revision activity, visual indicators of where text was pasted and whether it was later edited, a timeline that allows the reader to scroll through key stages of the draft, links to the Similarity and AI Writing reports, and a record of the student's AI chat interactions in the Clarity space (Turnitin, 2025b). Turnitin positions this as providing process transparency and insight into how the work was produced, not only what the final text contains (Turnitin, 2025a).
Figure 2 offers a simplified view of how a typical Clarity enabled assignment flows from setup to submission and review.
FIGURE PLACEHOLDER 2 (for author use, remove before publication)
Figure 2. Typical workflow for a Clarity enabled assignment
Description: A horizontal flow diagram with four steps, each in its own box, connected by arrows. Step 1 is Instructor setup with an academic icon and text such as creates Student Writing assignment and chooses AI tools. Step 2 is Student drafting with a student at a laptop and text such as writes in browser editor and may use AI assistant. Step 3 is Data capture and analysis with a server or gears icon and text such as logs writing process and runs similarity and AI checks. Step 4 is Staff review with a lecturer at a computer and text such as views Writing Report, Similarity and AI indicators.
Suggested AI image prompt:
Design a horizontal process diagram that shows a four-step workflow for an online writing analytics system in higher education. Step 1: Instructor setup with an academic icon and text such as creates Student Writing assignment, chooses AI tools. Step 2: Student drafting with a student at a laptop and text such as writes in browser editor, may use AI assistant. Step 3: Data capture and analysis with a server or gears icon and text such as logs writing process, runs similarity and AI checks. Step 4: Staff review with a lecturer at a computer and text such as views Writing Report, Similarity and AI indicators. Use arrows connecting the four steps from left to right. Styling should be flat, minimalist and suitable for an academic blog, with neutral colours, no brand logos and no company names.
4. What problems is Clarity trying to solve?
Clarity is Turnitin's response to growing discomfort with isolated AI detection scores. It aims to shift attention from opaque percentages towards a richer picture of the writing process. Product literature emphasises several goals: helping educators interpret any AI Writing indicators in context, supporting academic integrity investigations with process evidence, and normalising responsible use of AI by providing a controlled assistant within the writing environment (Turnitin, 2025a; Turnitin, 2025c). In theory, this represents a move away from a narrow policing mindset towards a more nuanced engagement with how students write. However, critics argue that integrated systems may simply repackage the weaknesses of first-generation detectors within a more intensive data collection regime, blurring the line between formative support and surveillance (Centre for Democracy and Technology, 2025; Scassa, 2023; Jisc, 2025).
5. What does the evidence say about detection and analytics?
Any assessment of Clarity needs to be grounded in the emerging evidence base on AI writing detection and writing analytics. Four themes are particularly salient: reliability and evasion, fairness and bias, privacy and surveillance, and pedagogy and workload.
5.1 Reliability and evasion
Perkins and colleagues evaluated six major generative AI detection tools using 805 text samples manipulated with straightforward evasion techniques such as paraphrasing and adding noise. Baseline accuracy across tools was around 39 percent, falling to roughly 17 percent once adversarial modifications were introduced. They conclude that such tools cannot be recommended for determining whether academic misconduct has occurred, although they may have limited, non-punitive uses (Perkins et al., 2024). Independent evaluations confirm volatile performance and dramatic gaps between commercial claims and observed accuracy, particularly under adversarial conditions (Elek et al., 2025; Weber-Wulff et al., 2023; Jisc, 2025). An entire industry of paraphrasing and so-called AI bypass tools exists specifically to defeat perplexity-based detectors (HasteWire, 2025; Jisc, 2025). In this context, process analytics are increasingly marketed as a way to compensate for unreliable product-based detection, but the empirical support for using keystroke dynamics to infer AI assistance is weak and highly variable (Leijten and Van Waes, 2013; Kundu et al., 2024).
5.2 Fairness and differential impact
Fairness is an equally serious concern. Liang et al. (2023) and the Stanford Human-Centered AI Institute (2023) show that several widely used GPT detectors consistently misclassify non-native English writing samples as AI-generated, with false positive rates above 60 percent for TOEFL essays compared with around 10 percent for native English writers. Sector commentators warn that such tools risk disproportionately harming international and English as an additional language students (Liang et al., 2023; Stanford University HAI, 2023; Jisc, 2025). Process-based systems introduce new fairness questions. Tools that flag deviations from a normative writing pattern, such as unusual time on task, a high proportion of pasted text or sparse revision, may disadvantage students whose writing processes do not conform to the assumed norm, including some neurodivergent students and those who rely on assistive technologies (Northern Illinois University CITL, 2024; Scassa, 2023). Clarity does not remove these risks. On the contrary, by combining AI writing scores with process analytics it introduces additional ways in which particular groups might be unfairly flagged, including students who draft offline and paste in their work or who have fragmented study patterns.
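One practical implication is that institutions piloting such tools should audit detector outcomes by student group before relying on them at all. The sketch below shows what a basic false-positive-rate comparison might look like. The counts and group labels are synthetic, invented for illustration only, although they loosely echo the scale of disparity Liang et al. (2023) report.

```python
# Synthetic audit data: these counts are invented for illustration and
# loosely echo the disparity reported by Liang et al. (2023); they are
# NOT real institutional figures.
flags = {
    # group: (human-written essays wrongly flagged as AI, total human-written essays)
    "L1 English writers": (10, 100),
    "EAL writers": (61, 100),
}

def false_positive_rate(flagged, total):
    """Share of genuinely human-written essays wrongly flagged as AI."""
    return flagged / total

rates = {group: false_positive_rate(f, t) for group, (f, t) in flags.items()}
for group, rate in rates.items():
    print(f"{group}: {rate:.0%} false positives")

# A large gap between groups indicates disparate impact and should rule
# out using the flags as primary evidence in misconduct cases.
gap = max(rates.values()) - min(rates.values())
print(f"Disparity gap: {gap:.0%}")
```

An audit of this shape, run on a representative local sample, gives institutions concrete grounds for the kind of equality impact assessment discussed later in this piece.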
Figure 3 sketches the main benefits and risks of systems like Clarity and highlights the tension between their analytic promise and the concerns raised in recent research.
FIGURE PLACEHOLDER 3 (for author use, remove before publication)
Figure 3. Potential benefits and risks of integrated writing analytics systems
Description: A balanced visual that presents potential benefits and risks side by side. One option is a set of scales with Potential benefits on one side and Risks and concerns on the other. Benefits might include insight into writing process, support for formative feedback and embedded writing support tools. Risks might include unreliable AI detection, bias against some students, privacy and data surveillance, and extra workload for staff. The design should look analytical rather than promotional.
Suggested AI image prompt:
Create a conceptual infographic that shows the trade-offs of using an integrated writing analytics and AI assistant system in higher education. Use a balanced design, for example a set of scales or a left-right comparison. On the Potential benefits side include short phrases like insight into writing process, support for formative feedback and embedded writing support tools. On the Risks and concerns side include short phrases like unreliable AI detection, bias against some students, privacy and data surveillance, extra workload. The style should be clean, minimalist and editorial, with icons that represent education and technology, a neutral colour palette, and no company names or logos. Keep the text brief and legible.
5.3 Privacy, data protection and surveillance
To generate the Writing Report, Clarity logs detailed behavioural data, including keystroke-level edits, the timing of writing sessions, paste events and the content of AI chat interactions in the writing space (Turnitin, 2025b; Turnitin, 2025c). From a data protection perspective, this constitutes profiling. The UK Information Commissioner's Office and related guidance on AI and data protection stress that controllers must identify a clear lawful basis, minimise the volume and granularity of personal data collected, ensure that processing is fair, transparent and proportionate, and provide meaningful routes to contest or challenge automated or algorithmically supported decisions (ICO, 2024; Browne Jacobson, 2024). The EU AI Act similarly classifies AI used to make decisions about students in educational assessment as high risk, subject to stringent transparency, documentation and oversight requirements (European Commission, 2021). In addition, the Office of the Independent Adjudicator has already upheld a complaint in which a student challenged the use of AI detection evidence in an academic misconduct case, emphasising the need for robust consideration of bias, disability and alternative evidence (OIA, 2025a). These developments underline that keystroke-level logging for routine coursework is difficult to reconcile with the principles of data minimisation and proportionality, particularly if analytics are used in high-stakes integrity decisions (Scassa, 2023; TEQSA, 2023).
5.4 Pedagogy, trust and workload
There are also pedagogic and cultural questions. Knowing that every keystroke and AI query is recorded may alter how students approach drafting and may inhibit experimentation or risk-taking, especially among students who already feel marginalised (Scassa, 2023; Tossell et al., 2024). For staff, integrated systems represent an additional analytic layer on top of similarity scores, AI writing indicators and existing marking systems. Evidence from early adopters of detection tools suggests that, far from saving time, streams of unreliable flags can significantly increase workload by generating new investigative tasks and complex casework (Bretag et al., 2011; Jisc, 2025; University of Pittsburgh Teaching Center, 2025). Sector guidance points in a different direction: QAA (2023, 2024), Ofqual (2024), TEQSA (2023) and the Russell Group (2023) all emphasise assessment redesign, authentic tasks and explicit education about generative AI, rather than reliance on detection. Clarity will align with these expectations only if it is subordinated to a design-led strategy, rather than driving practice through its technical affordances.
6. Principles for a cautious approach
For institutions and educators considering Turnitin Clarity, it may be helpful to articulate some principles for cautious engagement. First, design and dialogue should come before detection. Assessment tasks that are authentic, well aligned and explicitly integrate AI literacy and reflective disclosure of AI use are a more robust foundation for academic integrity than any detection tool (Gulikers et al., 2004; Gravett, 2024; Nicol and Macfarlane-Dick, 2006; QAA, 2023; Russell Group, 2023). Second, process analytics should be treated as context, not verdict. Writing timelines and paste patterns can prompt constructive conversations with students about how they approached a task, but they are not decisive evidence of wrongdoing on their own. Third, given the current evidence base, AI writing indicators embedded in platforms should not be used as primary grounds for high-stakes decisions. Their role, if any, should remain advisory, triangulated with other forms of evidence and with opportunities for students to respond (Perkins et al., 2024; Liang et al., 2023; Jisc, 2025; OIA, 2025a). Fourth, robust privacy and fairness safeguards are essential. Institutions should conduct data protection and equality impact assessments, provide clear information to students about what is logged and why, and put in place explicit rules on who can access process data, how long it is retained and how it may be used in integrity processes (ICO, 2024; Browne Jacobson, 2024; TEQSA, 2023). Particular attention is needed to the potential impact on international, English as an additional language and neurodivergent students (Liang et al., 2023; Northern Illinois University CITL, 2024; Scassa, 2023). Finally, any engagement with Clarity should be subject to ongoing evaluation, with staff and student feedback and a willingness to withdraw or limit features that prove misaligned with educational values (Digital Education Council, 2024a; Office for Students, 2025).
To close this section, Figure 4 summarises some practical principles that institutions can apply when they approach systems such as Clarity.
FIGURE PLACEHOLDER 4 (for author use, remove before publication)
Figure 4. Principles for cautious engagement with writing analytics and AI detection
Description: A simple principles infographic with either a circular layout or checklist style. It presents four to five short principles, for example Design first, detection second; Use process data as context, not verdict; Protect privacy and fairness; Teach AI literacy and dialogue; Review and adapt over time. Each principle is paired with a small icon that suggests assessment, dialogue or data protection.
Suggested AI image prompt:
Design a simple principles infographic for an academic blog, titled Cautious use of writing analytics and AI detection. Show four to five short principles in a circular or checklist layout, with phrases such as Design first, detection second; Use process data as context, not verdict; Protect privacy and fairness; Teach AI literacy and dialogue; Review and adapt over time. Use icons that suggest assessment, dialogue and data protection. The style should be flat, clean and suitable for higher education, with calm colours and no brand logos or platform names.
7. Conclusion: beyond detection
Turnitin Clarity is a significant development in the assessment technology landscape. It moves beyond a single AI score to a more complex assemblage of writing space, assistant and analytics. This creates genuine possibilities for richer conversations about students' writing processes and their use of AI, and it connects with long-standing strands of research on writing processes, formative feedback and metacognition (Lindgren et al., 2008; Ranalli et al., 2018; Mazari, 2025). At the same time, Clarity amplifies long-standing concerns about reliability, fairness, privacy and the creeping surveillance of learning that have been sharply articulated in recent critiques of AI detection and learning analytics (Scassa, 2023; Jisc, 2025; Kaushik et al., 2025; Matjola, 2025). The challenge for higher education is to resist the temptation to treat any platform as a technological fix for academic integrity and instead to situate tools like Clarity within a broader agenda of assessment redesign, AI literacy and shared responsibility for ethical practice (QAA, 2023; Office for Students, 2025; Russell Group, 2023). Used carefully, with clear safeguards and a design-first mindset, process analytics may offer limited support to that agenda. Used uncritically, they risk entrenching exactly the dynamics of mistrust and over-policing that generative AI has already brought to the surface.
References
Blommerde, T., Bright, W., Musgrave, E., Mitchell, R. and Heselton, R. (2024) 'AI detectors in universities: Time to turn them off and embrace AI for enhanced learning', Educational Developments, 25(4), pp. 8-11.
Bretag, T. et al. (2011) 'Academic integrity in Australia: Twenty years of national projects', in Bretag, T. (ed.) A handbook of academic integrity. Singapore: Springer.
Browne Jacobson (2024) 'Data protection in higher education: What to expect in 2024', Legal Insights, 23 January.
Centre for Democracy and Technology (2025) The shortcomings of generative AI detection: How schools should approach declining teacher trust in students.
Digital Education Council (2024a) Solving the AI governance problem. Executive Briefing 006.
Elek, A. et al. (2025) 'Evaluating the effectiveness and ethical implications of AI-generated text detection tools', Information, 16(10), 905.
European Commission (2021) Proposal for a regulation laying down harmonised rules on artificial intelligence (Artificial Intelligence Act).
Gulikers, J.T.M., Bastiaens, T.J. and Kirschner, P.A. (2004) 'A five-dimensional framework for authentic assessment', Educational Technology Research and Development, 52(3), pp. 67-86.
Gravett, K. (2024) 'Authentic assessment as relational pedagogy', Teaching in Higher Education, pp. 1-15.
HasteWire (2025) 'Grammarly vs Turnitin: Plagiarism and AI detection guide'.
Information Commissioner's Office (ICO) (2024) Regulating AI: The ICO's strategic approach.
Jisc (2025) AI detection and assessment 2025. National Centre for AI in Tertiary Education.
Kaushik, A., Diaz, K.P., Sava, M. and Kethar, J. (2025) 'Challenges and controversies in the use of AI within academic writing'.
Kundu, D. et al. (2024) 'Keystroke dynamics against academic dishonesty in the age of LLMs', arXiv preprint arXiv:2406.15335.
KU Leuven (2023) 'Use of AI detection software in assessment: institutional position statement'.
Leijten, M. and Van Waes, L. (2013) 'Keystroke logging in writing research: Using Inputlog to analyze and visualize writing processes', Written Communication, 30(3), pp. 358-392.
Liang, W. et al. (2023) 'GPT detectors are biased against non-native English writers', Patterns, 4(7), 100779.
Lindgren, E. et al. (2008) 'The effects of process-oriented writing intervention based on keystroke logging'.
Manchester Metropolitan University (2024) 'Guidance on the use of AI in teaching, learning and assessment'.
Matjola, I. (2025) 'Academic integrity and generative AI: A systematic literature review'.
Mazari, N. (2025) 'Building metacognitive skills using AI tools to help higher education students reflect on their learning process', Revista Humanismo y Sociedad, 13(1), pp. e4/1-e4/20.
Nicol, D. and Macfarlane-Dick, D. (2006) 'Formative assessment and self-regulated learning: A model and seven principles of good feedback practice', Studies in Higher Education, 31(2), pp. 199-218.
Northern Illinois University (NIU) CITL (2024) 'AI detectors: An ethical minefield'.
Office for Students (OfS) (2025) 'Embracing innovation in higher education: Our approach to artificial intelligence'.
Office of the Independent Adjudicator for Higher Education (OIA) (2025a) 'AI and academic misconduct: Case summary CS072501'.
Perkins, M. et al. (2024) 'GenAI detection tools, adversarial techniques and implications for inclusivity in higher education', arXiv preprint.
Quality Assurance Agency for Higher Education (QAA) (2023) Reconsidering assessment for the ChatGPT era: Advice on generative AI.
Quality Assurance Agency for Higher Education (QAA) (2024) Quality Compass: Navigating the complexities of the artificial intelligence era in higher education.
Ranalli, J., Feng, Y. and Chukharev-Hudilainen, E. (2018) 'Keystroke analysis as a scalable solution to capture evidence for teachers' insights', Journal of Learning Analytics, 6(3), pp. 90-104.
Russell Group (2023) Russell Group principles on the use of generative AI in education.
Scassa, T. (2023) 'The surveillant university: Remote proctoring, AI, and human rights', Canadian Journal of Comparative and Contemporary Law, 9, pp. 301-342.
Stanford University HAI (2023) AI detectors biased against non-native English writers.
TEQSA (2023) Advice to higher education providers: Use of generative AI in assessment and academic integrity.
Tossell, C.C. et al. (2024) 'Student perceptions of ChatGPT use in a college essay assignment: Implications for learning, grading and trust in artificial intelligence', IEEE Transactions on Learning Technologies, 17, pp. 1069-1081.
Turnitin (2025a) Introduction to Turnitin Clarity Student Writing assignments.
Turnitin (2025b) Introduction to Turnitin Clarity Writing Report.
Turnitin (2025c) 'Supporting responsible AI use with Turnitin's integrated AI tools'.
University of Nottingham (2024) 'Artificial intelligence in teaching, learning and assessment: Institutional guidance'.
University of Pittsburgh Teaching Center (2025) 'Guidance on the use of AI text detection tools in assessment'.
Vanderbilt University (2023) 'Statement on Turnitin AI detection and academic integrity'.
Weber-Wulff, D. et al. (2023) 'Testing of detection tools for AI-generated text', International Journal for Educational Integrity, 19(1), pp. 1-20.