Modern classrooms have become data collection zones where every keystroke and click may be monitored.

The Hidden Cost of Digital Learning

When Lawrence High School spent $162,000 on a student monitoring system, parents thought they were buying safety. What they actually purchased was something far more unsettling: a digital surveillance apparatus that tracks every keystroke, flags "concerning" words, and builds behavioral profiles of teenagers trying to figure out who they are.

This isn't an isolated case. Across America, schools have quietly assembled an unprecedented surveillance infrastructure that would make the security agencies of the early 2000s jealous. The pandemic accelerated this shift, normalizing monitoring tools that persist long after students returned to physical classrooms. Now, millions of students live under constant digital observation—and most have no idea how extensively they're being watched.

The tools sound benign. GoGuardian tracks web searches and screen content to "ensure focus." Gaggle scans messages and documents for self-harm indicators. ClassDojo captures everything from photos to location data. Schools frame these as protective measures. But the reality is far messier—a tangled web of questionable effectiveness, alarming overreach, and virtually no accountability.

From Protective Tools to Permanent Monitoring

The shift started innocently enough. When COVID-19 forced schools online in 2020, administrators faced a genuine problem: how do you ensure student safety and engagement through a screen? EdTech vendors had ready answers. Their platforms promised to identify at-risk students, prevent cyberbullying, stop school shootings before they happened.

Districts, desperate and overwhelmed, signed contracts with little scrutiny. The Electronic Frontier Foundation later gave Gaggle an "F" rating for student privacy, citing the AI's inability to understand context when flagging messages. But by then, surveillance had become normalized—just another part of the digital learning landscape.

Modern school surveillance isn't just watching what students do—it's trying to predict what they might do next, building behavioral profiles that follow them through their entire educational journey.

Here's what changed: Pre-pandemic, schools deployed monitoring primarily for network security and content filtering. Post-2020, the scope exploded. Modern EdTech surveillance now includes:

Content monitoring that scans every email, chat message, and document students create. Gaggle's system processes messages through AI algorithms trained to identify keywords related to violence, drugs, self-harm, and sexual content.

Activity tracking that records which websites students visit, how long they stay, what they search for. GoGuardian goes further, monitoring emotional cues and screen content in real-time.

Behavioral analysis that uses AI to build profiles predicting which students might become discipline problems or safety risks. The Pasco County School District in Florida deployed a predictive policing program that accessed student records to identify potential troublemakers—leading to a discrimination settlement.

Biometric data collection including facial recognition, fingerprint scanning, and gait analysis. Though New York banned facial recognition in schools and Colorado requires deletion of biometric data within 18 months, many states have no such protections.

The pandemic didn't create this surveillance ecosystem—it just gave schools permission to stop pretending it was temporary.

School computer networks enable comprehensive monitoring of student online activity across all devices.

The Technology Behind the Watching

Walk into a modern classroom and you're entering a data collection zone that operates on multiple levels simultaneously. Students log into Chromebooks provisioned by their school district. That login grants administrators access to everything that happens on that device, often even outside school hours and off school property.

The surveillance stack typically includes several layers. At the network level, content filters like Securly or Lightspeed Systems monitor all internet traffic, blocking sites administrators deem inappropriate. One level up, learning management systems like Google Classroom or Canvas track assignment completion, time spent on tasks, and interaction patterns. Another layer adds communication monitoring—tools that scan emails and chats for flagged keywords.
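
In practice, those layers all fire on the same activity at once. The sketch below is purely illustrative, with hypothetical rules and names rather than any vendor's real implementation, but it shows how a single page visit can be filtered, logged, and scanned simultaneously:

```python
# Illustrative sketch of a layered monitoring stack -- hypothetical rules and
# names, not any vendor's actual implementation.

BLOCKED_CATEGORIES = {"games", "social-media"}          # network-level filter
FLAGGED_KEYWORDS = {"violence", "drugs", "self-harm"}   # communication scanner

def network_filter(url_category: str) -> bool:
    """Layer 1: allow the request only if its category isn't blocked."""
    return url_category not in BLOCKED_CATEGORIES

def lms_activity_log(student_id: str, url: str, seconds_on_page: int) -> dict:
    """Layer 2: record what was visited and for how long."""
    return {"student": student_id, "url": url, "seconds": seconds_on_page}

def keyword_scan(text: str) -> list[str]:
    """Layer 3: naive keyword matching over anything the student types."""
    return [term for term in FLAGGED_KEYWORDS if term in text.lower()]

# One simulated page visit passes through every layer at once.
if network_filter("reference"):
    event = lms_activity_log("student-042", "https://example.org/essay", 340)
    flags = keyword_scan("Essay draft on drug policy and drugs in schools")
    print(event, flags)   # the same activity is filtered, logged, and scanned
```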

But the most invasive tools operate at the behavioral level. These systems don't just track what students do; they try to predict what students might do. The Surveillance Technology Oversight Project examined 13 major monitoring platforms and found that while all rely on AI to flag student activity, only six actually employ human reviewers to check the AI's work.

"Only about one-third of U.S. school districts employ full-time cybersecurity staff, making them vulnerable to sophisticated phishing and ransomware attacks while accumulating massive databases of student information."

— ListEdTech Analysis

That matters because AI gets things wrong—a lot. Students reported systems blocking JSTOR, a database of academic articles, making research impossible. LGBTQ students found the Trevor Project, a suicide prevention hotline, blocked by filters supposedly designed to protect mental health. After complaints, Gaggle stopped flagging words like "gay" and "lesbian," attributing the change to "greater acceptance of LGBTQ youth"—a tacit admission the algorithms were targeting marginalized students.
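
The failure mode is easy to reproduce in miniature. The following sketch uses a hypothetical word list, not Gaggle's actual model, yet it flags a student searching for a suicide-prevention hotline just as readily as it flags a genuine threat, because bare keyword matching carries no context:

```python
# Illustrative only: a context-blind keyword flagger of the kind described above.
FLAG_TERMS = {"suicide", "kill", "gun"}   # hypothetical list, not Gaggle's model

def flag(message: str) -> set[str]:
    words = set(message.lower().replace("-", " ").split())
    return FLAG_TERMS & words

print(flag("Looking up the Trevor Project suicide-prevention hotline"))
# {'suicide'} -- a student seeking help gets flagged
print(flag("Notes for my essay on gun control in civics class"))
# {'gun'} -- ordinary coursework gets flagged too
```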

The technical architecture of these systems creates several problems beyond accuracy. First, the sheer volume of data. ClassDojo alone captures text, photos, videos, location information, and potentially facial recognition data. Schools accumulate massive databases of student information with unclear retention policies and minimal security oversight.

Second, the integration. Many EdTech platforms share data with each other through APIs, creating a networked surveillance system where information flows between vendors without clear documentation. A student's behavior score in one system might influence their treatment in another, but the connections remain opaque.

Third, the asymmetry. Students can't see their own files, can't correct errors in their behavioral profiles, and in many cases don't even know which systems are monitoring them. Parents often remain equally uninformed until something goes wrong.

Every email, chat message, and document students create can be scanned by AI monitoring systems.

When Surveillance Becomes Discipline

The Lawrence High School journalism students who convinced their district to exempt them from monitoring understood something crucial: surveillance doesn't just watch—it shapes behavior. When you know everything you write might be flagged and reported, you stop writing freely.

Research shows this chilling effect is real and widespread. Students in monitored environments self-censor, avoiding topics that might trigger algorithmic scrutiny. They stop exploring controversial issues, asking difficult questions, or expressing opinions that might seem "concerning" to an AI trained on threat models.

The impact extends beyond free expression. Teachers report that constant monitoring erodes trust between educators and students. When private messages get flagged and reported, students stop confiding in adults at school. The very tools meant to identify at-risk students may actually isolate them further.

Discipline referrals show disturbing patterns. A 2022 Senate investigation found that four major monitoring vendors had not taken any steps to assess whether their algorithms furthered bias. The systems disproportionately flag students of color, LGBTQ students, and students with disabilities. Schools then take disciplinary action based on these biased flags, creating a digital pipeline from monitoring to punishment.

A 2022 Senate investigation revealed that four major EdTech surveillance vendors had taken zero steps to assess whether their algorithms were biased—despite widespread complaints from parents, teachers, and civil rights advocates.

Consider what happened in Pasco County, Florida. The district used student records to feed a predictive policing algorithm that identified which students might commit crimes. Students flagged by the system faced increased scrutiny from school resource officers. The district eventually settled a discrimination lawsuit, but not before potentially derailing the educational trajectories of countless students labeled "at risk" by an algorithm.

These aren't edge cases. They represent the logical endpoint of treating student surveillance as a neutral safety measure rather than a system that sorts, labels, and controls.

The chilling effect of surveillance extends beyond academics—students self-censor knowing they're being watched.

The Legal Gaps That Make It Possible

If this level of monitoring sounds illegal, that's because until recently, much of it existed in a regulatory gray zone. The primary federal law governing student privacy, FERPA (the Family Educational Rights and Privacy Act), was written in 1974—decades before anyone imagined schools would deploy AI-powered behavioral monitoring.

FERPA gives parents access to their children's education records and restricts disclosure of those records. But a 2012 amendment quietly expanded the definition of "directory information" and allowed schools to share student data with external companies without consent. EdTech vendors seized this loophole. Schools could now provide student data to private companies, and FERPA didn't count it as disclosure requiring parental approval.

The other major federal privacy law, COPPA (Children's Online Privacy Protection Act), only protects children under 13 and only applies to commercial websites directly collecting information from kids. Many EdTech tools sidestep COPPA by positioning themselves as institutional products that schools deploy, not services that collect data "directly" from children. It's a distinction without much practical difference to the students being monitored.

The inBloom scandal exposed just how wide these gaps had become. In 2011, the Bill and Melinda Gates Foundation funded a $100 million project to create a centralized database of student information accessible to for-profit vendors. Parents were shocked to discover this data sharing was legal under FERPA. The backlash forced inBloom to shut down in 2014, but it also revealed the urgent need for updated privacy protections.

States started filling the void. California's SOPIPA (Student Online Personal Information Protection Act) prohibits EdTech vendors from using student data for targeted advertising or creating profiles for non-educational purposes. By 2019, 40 states had passed 116 student privacy laws. In April 2025, the FTC updated COPPA rules to restrict long-term data retention and require explicit opt-in for targeted advertising.

But state-by-state regulation creates its own problems. A tool banned in New York might be perfectly legal in Texas. Companies can choose to operate in states with weaker protections. Students have dramatically different privacy rights depending on their zip code.

Meanwhile, the voluntary Student Privacy Pledge, which more than 400 EdTech companies signed starting in 2014, was quietly "retired" by the nonprofit that managed it. The organization cited the growth of state laws as making the pledge redundant. Companies like GoGuardian claim the pledge's retirement won't change their privacy practices. But the move eliminated one of the few mechanisms for holding companies accountable when state laws don't apply or aren't enforced.

Does Surveillance Actually Keep Students Safe?

Schools justify these surveillance systems with a simple premise: they prevent tragedies. Administrators cite examples of students whose concerning social media posts were flagged, leading to interventions that potentially prevented suicides or school violence. These stories are powerful. They're also mostly unverifiable and statistically questionable.

"There's little evidence of the effectiveness of these surveillance services in identifying suicidal students or preventing violence."

— Jessica Paige, RAND Corporation Researcher (2024)

Jessica Paige, a racial inequality researcher at RAND, wrote in 2024 that there's little evidence these surveillance systems actually identify suicidal students or prevent violence. The research base is thin, and what exists shows mixed results at best.

Consider the numbers. Millions of students are now under near-constant surveillance. Schools have spent hundreds of millions on monitoring tools. Yet school shootings haven't stopped. Teen suicide rates haven't dropped. What has increased is the number of false positives—innocent students flagged as threats, investigated, sometimes disciplined, always with a mark in their permanent record.

The problem is statistical. Tragic events like school shootings are extremely rare. Building an algorithmic system to predict rare events produces massive numbers of false positives. For every genuine threat these systems might identify, they flag hundreds or thousands of students who pose no danger. The costs of those false positives—loss of privacy, erosion of trust, chilling of expression—are real and widespread, while the benefits remain largely theoretical.
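
A back-of-the-envelope calculation makes the point. The numbers below are illustrative assumptions, not measured performance figures, but even a system that catches every genuine threat and misfires on only one percent of everyone else produces roughly two hundred false alarms for every real case:

```python
# Base-rate arithmetic with illustrative numbers -- not measured performance data.
students_monitored = 1_000_000
genuine_threats = 50                # assumption: genuinely rare events
false_positive_rate = 0.01          # assumption: wrongly flags 1% of other students
true_positive_rate = 1.0            # assumption: never misses a real threat

true_positives = genuine_threats * true_positive_rate
false_positives = (students_monitored - genuine_threats) * false_positive_rate
precision = true_positives / (true_positives + false_positives)

print(f"Students flagged:    {true_positives + false_positives:,.0f}")   # ~10,050
print(f"Flags that are real: {precision:.2%}")                           # ~0.50%
```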

There's also the substitution effect. Students who want to hide something from school monitoring can easily do so using personal devices and encrypted apps. The surveillance catches the naive and unsophisticated while missing the truly dangerous. It's security theater applied to education.

Effectiveness questions extend beyond safety claims. Vendors also promise these tools boost engagement and academic performance. But evidence for those claims is similarly weak. Monitoring may actually reduce engagement by making learning environments feel hostile and controlling rather than supportive.

Parents and students can push back by demanding transparency about which surveillance tools schools use.

The Economic Engine of School Surveillance

Understanding why surveillance expanded so rapidly requires following the money. The EdTech market has exploded, growing from a cottage industry to a $227 billion global sector. Student monitoring represents one of the most profitable segments because it generates recurring revenue—schools must renew licenses annually and often pay per-student fees.

That Lawrence High School contract for $162,000 illustrates the financial stakes. For a company like Gaggle, securing contracts with major districts generates millions in stable, predictable income. The business model incentivizes expansion—more features, more monitoring, more data collection, all justified as enhancing student safety.

But there's limited incentive to prove the tools actually work. Schools renew contracts based on perceived need and vendor marketing, not rigorous outcome studies. When problems emerge—students falsely flagged, legitimate research blocked, bias complaints—vendors respond with promises to improve the algorithms. The monitoring continues.

EdTech companies also face weak cybersecurity oversight. A ListEdTech analysis found that only about one-third of school districts employ full-time cybersecurity staff. Schools accumulate massive databases of sensitive student information, then lack the expertise to protect them. When breaches occur, vendors face minimal consequences.

The economics become even more troubling when you consider data monetization. While laws like SOPIPA prohibit selling student data for advertising, they don't prevent all commercial uses. Data can be "anonymized" (though meaningful anonymization is nearly impossible with rich datasets) and used for product development. Insights derived from student data can inform new products sold to other sectors. The line between protecting students and profiting from their information is blurry and poorly policed.

Civil Liberties in the Balance

Free speech advocates view school surveillance as a First Amendment issue. Students have limited but real constitutional rights to free expression. Monitoring every word they write, every site they visit, every message they send fundamentally constrains that freedom.

The Fourth Amendment enters through privacy protections against unreasonable searches. Courts have generally given schools broad latitude to search students for safety reasons. But does that logic extend to using AI to build behavioral profiles and predict future actions? The legal questions remain largely untested.

Due process concerns arise when students face discipline based on algorithmic flags. Students often don't know what triggered scrutiny, can't review the evidence against them, and have no meaningful way to challenge automated decisions. The system operates with little transparency and less accountability.

Schools aren't just monitoring students—they're training future citizens to accept surveillance as normal, privacy as optional, and constant tracking as the price of participation in society.

What makes this particularly significant is where it's happening. Schools are supposed to teach democratic citizenship, critical thinking, and civic engagement. Instead, they're teaching surveillance compliance. Students learn that constant monitoring is normal, that privacy is something you sacrifice for safety, that authority uses technology to control behavior.

This normalization likely extends far beyond graduation. The generation growing up under school surveillance may accept workplace monitoring, government tracking, and corporate data collection as simply how the world works. Schools aren't just monitoring students—they're training future citizens to accept a surveillance state.

What Parents and Students Can Do

The first step is recognizing the scope of the problem. Many parents remain unaware their children are being monitored beyond basic content filtering. Schools often don't proactively disclose which systems they use, what data those systems collect, or how long information is retained.

Parents can request this information. While FERPA gives parents rights to access education records, those rights have limits. Schools can claim monitoring data belongs to the vendor, not the school, placing it outside FERPA's scope. Still, asking questions creates pressure and forces transparency.

At the district level, parents can organize. The Lawrence High School journalism students succeeded by presenting their case to school officials. Other parent groups have pushed for policies limiting monitoring, requiring annual disclosure of surveillance tools, mandating retention limits on collected data, and establishing procedures for students to challenge false flags.

Students have rights worth asserting. In many states, students over 18 can access their education records directly. FERPA allows students to request amendments to records they believe are inaccurate. While this doesn't directly address real-time monitoring, it creates a paper trail and establishes that students are paying attention.

Technical countermeasures exist but must be used carefully. Using personal devices and encrypted messaging avoids school monitoring, but may violate acceptable use policies. The better approach is separating school and personal digital lives—use school devices only for required activities, personal devices for everything else.

The Policy Changes We Need

Individual actions matter, but systemic problems require systemic solutions. Several policy reforms could meaningfully improve student privacy without sacrificing legitimate safety interests.

Meaningful consent requirements. Before implementing monitoring systems, schools should be required to notify parents and students in clear, non-technical language about exactly what will be collected and how it will be used. Consent should be informed and specific, not buried in 50-page acceptable use policies.

Independent effectiveness studies. Before spending taxpayer money on surveillance tools, districts should require vendors to provide peer-reviewed research demonstrating the tools actually improve safety or learning outcomes. Claims should be verified by independent researchers, not vendor-funded studies.

Algorithmic accountability. When AI systems flag students, schools should be required to document the basis for the flag, allow students to review and contest the determination, and maintain statistics on accuracy rates and demographic disparities. Systems showing bias should be discontinued.

Strict data minimization. Schools should collect only information directly necessary for specific educational purposes. General behavioral profiling, open-ended monitoring, and predictive analytics should be prohibited absent compelling justification and meaningful oversight.

Clear retention limits. Student data should be deleted promptly when no longer needed. Colorado's 18-month rule for biometric data provides a model. Students graduating or leaving the district should have their monitoring data purged.

Private right of action. Currently, FERPA violations are reported to the Department of Education, which rarely takes action. Students and families should be able to sue directly when their privacy rights are violated, creating real accountability for schools and vendors.

Federal standards. The state-by-state patchwork creates loopholes and inconsistency. Congress should update FERPA for the digital age, establishing baseline protections that apply nationwide while allowing states to provide stronger protections.

Toward a Different Model

The deepest problem with school surveillance isn't technical—it's philosophical. These systems assume students are threats to be monitored rather than people to be educated. They replace trust with control, care with compliance, education with enforcement.

Alternative models exist. Some schools focus on building authentic relationships between students and adults, creating environments where students feel comfortable seeking help rather than hiding. These approaches require investment in counselors, social workers, and small class sizes—harder to scale than buying monitoring software, but likely more effective.

Other schools use technology differently, emphasizing tools that help students learn rather than tools that track student behavior. The pandemic proved students can succeed with substantial autonomy if given appropriate support and clear expectations.

The choice isn't between safety and privacy—it's between superficial security theater and genuine community building. Schools can be safer without turning into surveillance states, but only if we're willing to invest in human relationships rather than algorithmic monitoring.

That journalism class at Lawrence High School that got itself exempted from monitoring understood something important: education requires freedom. The freedom to ask difficult questions, explore controversial ideas, make mistakes, and figure out who you want to become. Constant surveillance suffocates that process.

Their victory was small and limited to a single class. But it proved that surveillance in schools isn't inevitable. It's a choice. And choices can change.

The question facing parents, educators, and policymakers is whether we'll continue sleepwalking into comprehensive school surveillance, or whether we'll pause to ask if the price we're paying—in privacy, in trust, in the habits of citizenship we're teaching—is worth the security we're supposedly buying.

The answer matters. Because the students growing up under these systems will build the world the rest of us will live in. And what we teach them about surveillance, privacy, and freedom will shape what kind of world that turns out to be.
