University of Toronto Professor Faces Lawsuit Over Biased AI Grading System

That morning, the lecture hall had a different vibe. Students shuffled into their seats with unusual caution, glancing between the professor at the front and their laptops. Albert, too, had noticed the unsettling feeling that everyone was being observed, measured, and suspected. He had spent the entire night working on his English essay—the kind of tedious, draining work that leaves your eyes burning and your mind foggy. Two months later, an email accused him of cheating: his work had been flagged by the system. The proof? Expressions such as “in addition to” and “in contrast.”

It’s difficult to ignore the irony. Universities across Canada are using AI detection tools that frequently misidentify human-written work while simultaneously battling tutoring companies that steal professor-created materials. One aspect of this mess is revealed by the University of Toronto’s lawsuit against Easy Group Inc., a tutoring empire that allegedly makes money off of stolen intellectual property and charges students up to $1,449 for course packages based on unapproved lectures and tests. However, students like Albert are being used as collateral in the academic community’s war against artificial intelligence, which is taking place in the background.

Institution: University of Toronto
Issue: Lawsuit involving AI detection tools and copyright infringement by tutoring companies
Key Parties: Easy Group Inc. (Easy EDU), U of T professors, students accused of AI usage
Financial Impact: Easy EDU charging up to $1,449 per course package
Students Affected: Over 210,000 students across Canadian universities
Legal Action: Lawsuit seeking monetary damages, copyright protection, and an injunction
Reference: University of Toronto Official News

With offices in Toronto, Vancouver, and even China, Easy EDU says it serves over 210,000 students at Canadian universities. Yuwei Zhang founded the business in 2014 and describes it as an educational support service. The university has a different perspective. U of T and three of its professors—Robert Gazzale, Lisa Kramer, and Ai Taniguchi—filed a lawsuit alleging that Easy EDU routinely copies copyrighted materials and packages them for financial gain while giving the false impression that academics and institutions have approved of this arrangement.

When talking about the theft, Professor Gazzale, who teaches economics, sounds worn out. “I have devoted years to preparing materials for my students,” he declared in a statement. “It is completely wrong for a business to profit from the unapproved use of my intellectual property. It’s theft.” He is also troubled by something more profound about the strategy these businesses advocate: they don’t actually help students learn, he contends. They help them game the system.


Everything was made more difficult by the pandemic. In an attempt to fill the gaps left by remote learning, universities rushed to help struggling students by extending office hours and modifying tutoring schedules. In the meantime, businesses such as Easy EDU took advantage of those same anxieties. Assistant Professor Taniguchi, whose linguistics classes draw many international students, described the dynamic with particular bitterness: these businesses prey on students’ fears of underperforming in unfamiliar surroundings and of disappointing families thousands of miles away. The outcome? Students are steered toward academic misconduct findings that may ruin their futures.

The consequences for overseas students go beyond receiving a failing grade. Cancellations of study permits and forced returns home may result from academic suspension. For what may begin as a last-ditch effort to seek assistance, it is a life-altering consequence. According to the university’s lawsuit, Easy EDU violates copyright laws and takes advantage of this vulnerability, resulting in a confluence of ethical and legal transgressions.

First, U of T attempted diplomacy. University representatives met with Easy EDU in October 2020 to talk about issues related to academic integrity and copyright infringement. In response, Easy EDU sent an email in December pledging to examine the materials and remove any copyright infringements. Months went by. Nothing was altered. Easy EDU continued to produce and market tutoring packages with stolen content, according to the statement of claim. In April 2021, the university’s attorneys sent another letter requesting that the business cease operations. There was no reply.

However, this is where the narrative takes a more convoluted turn. Universities are using AI detection tools that may be inherently unreliable while also pursuing companies such as Easy EDU for using academic materials without authorization. Generative AI researcher Dr. Mike Perkins discovered that the accuracy of AI detectors in identifying AI-generated text is only 39.5%. Accuracy drops to 22.1% when basic evasion strategies—minor text manipulation—are used. “All the research says time and time again that these tools are unreliable,” Perkins said. “And they are very easily tricked.”

Since launching an AI detection tool in 2023, Turnitin, the industry leader in plagiarism detection, has processed over 130 million papers. According to the company, 3.5 million of those submissions were flagged as at least 80% AI-written, with an error rate of less than 1%. That sounds impressive until you consider the sheer number of students involved: even a 1% false positive rate would mean tens of thousands of students falsely accused. Sensing the risk, some universities have already decided not to use the tool.
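The scale of that claim is easy to check. A minimal back-of-the-envelope sketch, applying the sub-1% error rate to the flagged submissions (as the article's own arithmetic does; the variable names are illustrative, not Turnitin's):

```python
# Rough estimate of wrongly flagged submissions implied by the
# article's reported figures (not official Turnitin methodology).
flagged = 3_500_000   # submissions flagged as >= 80% AI-written
error_rate = 0.01     # "less than 1%" error rate, per the company

# Upper bound on false flags among the flagged set.
wrongly_flagged = round(flagged * error_rate)
print(f"up to {wrongly_flagged:,} wrongly flagged submissions")
# → up to 35,000 wrongly flagged submissions
```

Even under the company's own best-case numbers, "less than 1%" still leaves tens of thousands of papers, and students, on the wrong side of the detector.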

The recording light blinked red as Albert sat through his hearing in front of three employees. They inquired about Grammarly, ChatGPT accounts, and any other digital tool that might have influenced his essay. It had been months since he had written it, and when he was questioned, the details seemed hazy. He was on the verge of tears defending an admittedly mediocre piece of work after putting in a lot of study time and maintaining straight-A grades throughout the semester. The charge felt personal, like a betrayal of all he had worked for, and it had nothing to do with the grade.

These detection systems are biased against non-native English speakers, flagging their work 61% of the time compared to just 5% for native English speakers, according to research from Stanford. False accusations against neurodivergent students are also disproportionately common. Students who write in more straightforward language or simpler syntax—often those already struggling—are caught in the crossfire. “The only students who don’t evade detection are really struggling or they are not willing or able to pay for the most advanced AI tools,” Perkins said. “And the students who are most vulnerable to having their academic careers ruined are the ones you end up catching.”

Emma is very familiar with that emotion. As a single parent balancing schoolwork, childcare, and financial survival, she had fallen behind when illness interrupted her plans. ChatGPT whispered assurances of relief: just this once, just to catch up. Even as she did it, she knew it was wrong, but her fatigue overrode her judgment. Panic struck when the zero arrived with “concerns over plagiarism” attached. She admitted everything at her misconduct panel. Remarkably, the panel was unable to support the plagiarism allegation, perhaps because of her circumstances. She received a first-class grade at the end of the year, but the experience tarnished it.

Universities don’t seem to want to acknowledge this tension. They are both perpetrators of a surveillance system that ensnares innocent students and victims of copyright theft by businesses such as Easy EDU. U of T filed a lawsuit seeking an injunction against future infringement, monetary damages, and the return of copyrighted materials. The university has promised to use any money recovered for academic support for students. This is a commendable objective, but it seems a little hollow when those same students are being falsely accused by flawed detection systems.

Earlier this year, a student at Northeastern University filed a formal complaint against a professor for using artificial intelligence (AI) to create a presentation that included images of people with extra body parts and misspelled words. While secretly using AI himself, the professor had prohibited students from using it in his class. The student requested a reimbursement of $8,000 for their tuition. A formal AI policy with attribution requirements and output review standards was adopted months after university officials rejected the claim. The incident exposed an unsettling fact: many academics believe that using AI for their own work is “perfectly fine” while being unaware of the necessary safety measures.

While working on a group project, David observed a classmate turn in work that appeared suspiciously polished given the student’s difficulties with English. After running it through AI detectors, he politely confronted the classmate, was rebuffed, and discreetly gathered proof of their exchange in case the entire project was flagged. “I’ve grown desensitized to it,” David acknowledged. “Half the students in my class are giving presentations that are clearly not their own work.” Sometimes he wonders what happens when everyone around him cheats their way to their goals, whether in professional settings or graduate programs.

Everyone involved in higher education should be concerned about a recent blind test conducted by the University of Reading. Through the university’s examination system, they turned in ChatGPT-written responses. Ninety-four percent remained undetected. Even worse, submissions written by AI received higher scores than those written by humans. It implies that academics who assert that they can “always tell” when a piece of work is AI-generated may be dangerously overconfident.

With fourteen writing centers offering one-on-one assistance, learning strategists, structured study groups, old exam banks, workshops, and an academic success center, U of T provides students with comprehensive academic support. Additional workshops are offered all year long to international students. Despite the availability of these resources, students continue to rely on businesses like Easy EDU, spending thousands of dollars on materials that could ruin their academic careers. This disconnect extends beyond AI detection and copyright infringement. It has to do with trust, or the lack of it.

During the pandemic, Professor Gazzale stated that in order to better serve international students, his economics department modified free peer tutoring services and extended online office hours. They were sincerely attempting to assist. However, Easy EDU’s promise of a ready-made coursepack feels more immediate than office hours set for next Tuesday when a student sits at 3 a.m. staring at a blank screen, deadline hours away. The desperation that tutoring companies take advantage of is not created in a vacuum; rather, it develops in the gap between individual student crises and institutional support.

One solution to the issue is represented by the lawsuit that U of T filed. Close down businesses that steal intellectual property, safeguard academics’ copyrighted works, and remove the appearance of institutional support for these services. It is both morally and legally sound. However, it doesn’t explain why universities have adopted detection technologies that research indicates are dangerously unreliable or why students continue to seek these services despite the risks.

After his hearing, Albert was eventually cleared, but the ordeal left him shaken. Even though he had studied diligently, maintained high grades, and turned in his own work, he still had to defend his integrity in front of a panel. One of its most diligent students was traumatized by the system that was meant to uphold academic standards. Universities might improve their AI detection systems, lower false positives, and create more precise regulations. It’s also possible that the underlying issue is structural rather than technological, stemming from how institutions handle pressure and allocate trust.

Easy EDU is still in operation throughout Canada; its website continues to promote its services to students at Waterloo, York, McGill, UBC, Alberta, and the three campuses of the University of Toronto. The case proceeds through the legal system. Students are still being accused; many are not guilty. Professors protect their intellectual property while using tools that cannot reliably distinguish human from machine writing. And somewhere, at midnight, another weary student hunches over a keyboard, trying to decide between academic integrity and academic survival. The university calls it misconduct and theft. Students call it survival. They could both be correct.
