In 2024, a Maryland high school student named Ailsa Ostovitz learned firsthand the unforeseen costs of the global rush to police AI use in education. When she turned in a deeply personal essay about her love of music, her teacher ran the submission through a commercial AI detection tool. The software incorrectly flagged the work as machine-generated, and her grade was docked as a result. Ostovitz, who never used AI to draft the assignment, told NPR her frustration was overwhelming: “I write about music. I love music. Why would I use AI to write something that I like talking about?”
Ostovitz’s experience is far from isolated. It has become an increasingly common reality as generative AI tools like ChatGPT upend academic norms across North America, Europe, and now the Caribbean. While mainstream headlines have centered on the chaos at U.S. and U.K. campuses, students across Caribbean nations, including Belize, where learners are currently preparing for CSEC and CAPE School-Based Assessments (SBAs), are now navigating a web of new AI rules, detection tools, and punitive consequences that did not exist just two years ago.
There is no denying that AI-facilitated academic misconduct is a growing challenge. An investigation by The Guardian, drawing on official data from 131 U.K. universities, recorded nearly 7,000 confirmed AI-related cheating cases in a single academic year, equivalent to 5.1 cases per 1,000 students and up sharply from just 1.6 cases per 1,000 the year prior. A separate survey from the Higher Education Policy Institute found that 88% of students now use generative AI for their assessments, a jump from 53% just 12 months earlier. Researchers at the University of Reading tested popular AI detection systems and found that 94% of fully AI-generated work slipped past the tools undetected. Traditional plagiarism, by contrast, has plummeted: what once made up nearly two-thirds of all academic misconduct has been displaced by a harder-to-detect, harder-to-define form of dishonesty. Cheating has not disappeared; it has fundamentally transformed.
Casey Cuny, a 23-year veteran California high school English teacher, summed up the scale of the shift in a 2025 Associated Press interview: “The cheating is off the charts. It’s the worst I’ve seen in my entire career.” At St. Peter’s University in New Jersey, professor Stephen Cicirelli went viral on social media after highlighting a particularly absurd example: one of his students turned in a fully AI-written paper, then followed up with an apology email that was also written by ChatGPT.
The widespread institutional response to this shift has been to fight AI with AI. Turnitin, the plagiarism detection platform used by more than 16,000 academic institutions globally, launched an AI-writing detection feature in 2023, alongside competitors like GPTZero and Copyleaks. School districts have poured hundreds of thousands of dollars into these tools: Florida’s Broward County Public Schools signed a three-year, $550,000 Turnitin contract, while an Ohio school district pays GPTZero roughly $5,600 annually to serve 27 of its teachers.
But academic integrity researchers broadly agree that these tools are too unreliable to drive high-stakes decisions about students’ academic futures. “It’s now fairly well established in the academic integrity field that these tools are not fit for purpose,” says Mike Perkins, a leading AI and academic integrity researcher whose work is cited by the Caribbean Examinations Council (CXC) in its own AI policy framework. Perkins’ research found that top detectors regularly mislabel genuine student work as AI-generated, and that their accuracy drops even further when students lightly edit AI text to read more like human writing.
Even the companies that sell these tools acknowledge their flaws. Turnitin states openly on its website that its AI detection “may not always be accurate… so it should not be used as the sole basis for adverse actions against a student.” GPTZero CEO Edward Tian has echoed that caution: “We definitely don’t believe this is a punishment tool.” Despite these warnings, a 2025 nationally representative poll from the Center for Democracy and Technology found that more than 40% of U.S. teachers in grades 6 through 12 used AI detection tools in the 2024–2025 school year, even as most admitted the tools are flawed. A small number of leading institutions, including UCLA and UC San Diego, have taken a bolder step, deactivating AI detectors entirely over unacceptable false-positive rates.
One of the most alarming, and most underreported, flaws of AI detection tools is their systematic bias against students who speak English as a second language, a dynamic that carries outsized risk for Caribbean learners. A 2023 study by Stanford computer scientists found that seven leading AI detectors incorrectly flagged non-native English speakers’ writing as AI-generated 61% of the time. For roughly 20% of those papers, all seven detectors unanimously judged the authentic human work to be machine-produced. By contrast, the detectors almost never made that mistake when evaluating writing from native English speakers.
This bias is baked into the design of the tools. Many detectors rely on a measure researchers call perplexity: how predictable each word in a passage is to a language model. Because AI systems tend to produce highly predictable text, low perplexity is treated as a machine signature, but plain word choice and simple syntactic structure, patterns common in writing by non-native English speakers, produce exactly the same signal. “The design of many GPT detectors inherently discriminates against non-native authors, particularly those exhibiting restricted linguistic diversity and word choice,” explained study co-author Weixin Liang. For Caribbean students, this is not an abstract academic concern: across CSEC and CAPE cohorts, English is the medium of instruction, but many students go home to households where Kriol, Garifuna, Spanish, or another regional language is the primary tongue. A student who writes in straightforward sentences because they are translating their thoughts from a first language faces the same false-flagging risk as non-native speakers at U.S. and European universities.
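For readers curious how that scoring works under the hood, the sketch below shows the perplexity heuristic in miniature, using the open-source GPT-2 model via the Hugging Face transformers library. It is an illustrative reconstruction of the general technique, not any vendor’s actual system; the model choice and the cutoff of 60.0 are assumptions made for the example.

```python
# Minimal sketch of perplexity-based AI detection (illustrative only).
# Assumes the Hugging Face transformers library; the gpt2 model and the
# 60.0 threshold are stand-ins, not any commercial detector's values.
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def perplexity(text: str) -> float:
    """Average 'surprise' of the language model at each token."""
    ids = tokenizer(text, return_tensors="pt").input_ids
    with torch.no_grad():
        # Passing labels=input_ids returns the mean cross-entropy loss
        # over the sequence; exponentiating it gives perplexity.
        loss = model(ids, labels=ids).loss
    return torch.exp(loss).item()

def flag_as_ai(text: str, threshold: float = 60.0) -> bool:
    # Low perplexity = highly predictable prose = "machine-like."
    # Plain, simply structured writing, common when authors compose in a
    # second language, is also highly predictable: the bias described above.
    return perplexity(text) < threshold
```

Run on a paragraph of plain, formulaic prose, a checker like this returns a low score and raises a flag regardless of whether a human or a machine wrote the words.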
Taylor Hahn, a Johns Hopkins University instructor who noticed this pattern in Turnitin’s flagging of international students’ work, recalled one meeting where a student immediately produced proof of original authorship, including annotated drafts and handwritten outlines, showing the tool had simply been wrong. Incidents like this have pushed students to extreme measures to avoid false accusations, sparking a new technological arms race in classrooms. As detection tools proliferated, a multi-million-dollar counter-industry of “AI humanizer” tools emerged almost overnight. More than 150 such tools now exist, built to rewrite AI-generated text so it evades detection algorithms; some are free, others charge roughly $20 a month, and the industry drew 33.9 million website visits in a single month at the end of 2025.
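What humanizers do to text is, at its core, the inverse of the detection heuristic sketched earlier: make the writing less statistically predictable. The toy rewriter below illustrates the principle; the substitution table, swap rate, and function names are all invented for this example, and commercial tools are far more sophisticated, but the underlying idea of pushing perplexity upward is the same.

```python
# Toy "humanizer" sketch (invented for illustration, not a real product).
# Rarer synonyms are less predictable to a language model, so each swap
# tends to raise the text's perplexity and slip past a low-perplexity flag.
import random

SWAPS = {
    "important": ["salient", "consequential"],
    "shows": ["evinces", "bears out"],
    "use": ["employ", "draw on"],
    "many": ["a host of", "no small number of"],
}

def toy_humanize(text: str, rate: float = 0.5) -> str:
    out = []
    for word in text.split():
        key = word.strip(".,;").lower()
        if key in SWAPS and random.random() < rate:
            swap = random.choice(SWAPS[key])
            # Preserve sentence-initial capitalization.
            out.append(swap.capitalize() if word[0].isupper() else swap)
        else:
            out.append(word)
    return " ".join(out)

# Each run yields slightly different, less predictable phrasing.
print(toy_humanize("Many students use AI because it shows important patterns."))
```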
While many users rely on humanizers to cover up intentional cheating, a growing number of students who never use AI run their own original work through these tools just to avoid false accusations, a heartbreaking compromise that erodes the quality of their writing. Brittany Carr, a Liberty University student, was falsely flagged after turning in a personal essay about her own experience with a cancer diagnosis. “How could AI make any of that up?” she wrote to her professor. “I spoke about my cancer diagnosis and being depressed and my journey and you believe that is AI?” Fearing that a false finding would cost her VA education benefits, Carr began running every assignment through detection tools and rewriting any section the tools flagged. “But it does feel like my writing isn’t giving insight into anything. I’m writing just so that I don’t flag those AI detectors.” After the semester ended, she dropped out of the university entirely.
Back in Maryland, Ailsa Ostovitz now spends an extra half hour running every assignment she writes through multiple detection tools before submitting, determined to avoid a repeat of the grade penalty from her music essay. Turnitin has responded to the rise of humanizers with new “bypasser detection” features, while humanizer tools have in turn updated their technology to mimic human keystroke patterns and defeat browser-based tracking. As one student put it: “So it’s like, how far do you want to go down the rabbit hole? I’m making myself crazy.”
While the detection arms race spirals out of control in North America and Europe, the Caribbean has taken a fundamentally different approach, one that centers human judgment over algorithmic verdicts and frames AI as a pedagogical resource rather than an existential threat. In a May 2026 video address titled “Who You Choose to Be,” CXC Director of Operations Dr. Nicole Manning spoke directly to regional students and teachers, delivering an unambiguous message on AI detection: “AI checkers are one input. They are not the verdict. There will be human interventions right through the process to ensure fairness.”
That policy comes in response to a gap the CXC identified across the region: a December 2024 CXC study found that roughly 70% of Caribbean nations still lack official national AI policies for education, even as AI tools have become ubiquitous in regional classrooms. The finding prompted the council to develop a comprehensive Standards and Guidelines framework, which took effect for the May–June 2026 examination cycle, the same sitting that thousands of students across Belize and the wider Caribbean are currently preparing for.
Under the CXC framework, AI is permitted in SBAs within clear, transparent boundaries. Students can use AI to brainstorm ideas, clarify difficult concepts, explain unfamiliar terms, or draft structural outlines for their work. They may not submit work generated wholly or mostly by AI, and any student who uses AI in any capacity must submit a formal disclosure form and originality report. The acceptable AI similarity threshold is set at 20%, and teachers must provide a detailed rationale before recording any finding of academic misconduct against a submission that exceeds it.
The framework draws directly on the AI Assessment Scale developed by Mike Perkins, the same researcher who has repeatedly warned that commercial AI detectors are “not fit for purpose.” Instead of outsourcing academic judgment to unproven software, the CXC built its system around the longstanding teacher-student relationship that sits at the core of effective education. “The teacher-student relationship built over months of observation, drafts, conversations, and guidance remains central to how SBAs are moderated and assessed,” Dr. Manning explained.
CXC Registrar and CEO Wayne Wesley added that the framework requires a rethinking of longstanding assessment practices: “You have to engage students in more one-on-one conversations to appreciate whether the work they are presenting is truly their own. It also requires us to re-think how assessment is done from a summative and formative standpoint.”
At the tertiary level, the University of the West Indies (UWI), whose Open Campus serves students across Belize and the entire Caribbean, is also moving toward systemic, region-specific change. In late April 2026, UWI announced a partnership with the University of the West of Scotland to join the IntegraGuard Project, an initiative to build fair, transparent academic integrity systems that pair AI-assisted detection with human investigation rather than replacing human judgment with algorithmic decisions. UWI has also finalized its own institutional Artificial Intelligence Policy Framework and launched a dedicated AI Institute at its St. Augustine Campus to address the Caribbean’s unique development challenges through AI. The institute’s premise is that the region cannot simply import policy frameworks designed for wealthy nations with different linguistic, historical, and educational contexts.
As CXC Director of Technological Innovation Rodney Payne put it, reflecting on the region’s coordinated approach: “For us to benefit as a region, we need harmonious development, utilising the technologies across the board. It’s not going to help us if one state moves ahead quickly and the others are struggling to follow.”
Among education experts, there is growing agreement that punishing students on the strength of unreliable algorithmic findings is not a sustainable solution. Carrie Cofer, a Cleveland high school English teacher, tested GPTZero by uploading a chapter of her own PhD dissertation; the tool labeled it 89 to 91% AI-written. “I don’t think it’s an efficacious use of their money,” she said of institutional spending on AI detection. “The kids are going to get around it one way or the other.”
Erin Ramirez, an associate professor at California State University, Monterey Bay, summed up the unfair burden placed on innocent students: “Students now are trying to prove that they’re human, even though they might have never touched AI ever.” For learners already navigating the linguistic and economic barriers common across the Caribbean, the added burden of proving their authenticity to a machine is an injustice that should concern everyone in education.
A small but growing number of institutions have already rejected detection tools outright: the University of Pittsburgh scrapped AI detection in 2025, concluding that false positives “carry the risk of loss of student trust, confidence and motivation, bad publicity, and potential legal sanctions.” Most institutions worldwide have not yet followed that lead. But the CXC’s alternative framework offers a clear path forward, centered on core academic values rather than technological panic. As Dr. Manning put it: “Integrity is not about whether a machine can detect what you did. It is about who you choose to be.”