FERPA & AI: How EdTech Is Surveilling Students (And Why the Law Lets Them)
Your child's school knows more about them than you do. Not their grades; you know those. The school knows which YouTube videos they watch during study hall, how long they spend on each paragraph of their assigned reading, whether their mouse movements indicate distraction, what their facial expressions looked like during last Tuesday's quiz, and whether the biosignals from their Chromebook camera suggest they're about to cheat.

This data is legal to collect. The law that was supposed to prevent it has a loophole you could drive a data center through. And AI is making the surveillance dramatically more sophisticated.

The Family Educational Rights and Privacy Act (FERPA) was passed in 1974, the year after the first commercial handheld calculator. It was designed to protect paper records: grades, disciplinary files, test scores. The law gives parents (and students over 18) the right to inspect and correct those records. FERPA was not designed for:

- Real-time behavioral analytics
- AI-powered proctoring cameras
- Learning management system clickstream data
- Emotion detection during video classes
- Predictive dropout algorithms
- Behavioral risk scoring

The critical loophole is the "school official" exception. FERPA allows schools to share student education records with third-party vendors if those vendors are deemed "school officials" acting under the school's "direct control." In practice, this means a school can share student data with an edtech company, that company can process it however it wants, and the only requirement is a contractual clause saying the company won't use it for other purposes.

Do the contracts work? A 2024 Student Privacy Compass audit of 400 edtech vendor contracts found:

- 73% had vague or unenforceable data use restrictions
- 61% retained the right to aggregate and de-identify student data (then use it freely)
- 48% allowed data sharing with subprocessors not named in the contract
- 22% explicitly reserved the right to use data for product improvement (i.e., training AI models)

COVID-19 moved exams online. Universities, suddenly unable to proctor in person, deployed remote proctoring software at unprecedented scale. The technology never left. The major players:

Honorlock: Deploys a Chrome extension that activates the student's webcam, microphone, and screen recording for the duration of the exam. AI analyzes gaze direction (looking away = flag), audio (voices in the background = flag), and screen activity. Used by 1,400+ institutions. The extension requests access to "all your data on websites you visit," a permission scope that extends beyond exam windows.

ProctorU (now Meazure Learning): Uses AI facial recognition to verify student identity at exam start. Flags "suspicious behaviors" including looking away from the screen for more than two seconds, covering the mouth, or too much head movement. Suffered a data breach in 2020 exposing 444,000 student records in plaintext: names, addresses, dates of birth, partial SSNs.

ExamSoft (Turnitin): Captures continuous video during exams, runs AI facial detection to confirm the enrolled student is taking the test, and flags anomalies. University of Miami students filed suit in 2021 arguing the facial recognition technology had significantly higher error rates for students with darker skin, a documented pattern in AI facial recognition.

Proctorio: Tracks eye movements, head position, facial expressions, mouse movement patterns, keystroke dynamics, browser activity, and background audio. Uses machine learning to generate a "suspicion score" for each student. An Ontario court found Proctorio violated academic freedom when it filed DMCA takedowns against a professor who shared screenshots of its algorithms for analysis.

The empirical case for AI proctoring is weak. A 2023 meta-analysis of 27 studies found:

- No statistically significant reduction in academic dishonesty from AI proctoring versus traditional methods
- 35% false positive rates for "suspicious behavior" flags in majority-minority student populations
- Disproportionately high suspicion scores for students with disabilities, particularly ADHD and autism, due to atypical eye movement and fidgeting patterns
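Those percentages compound badly, because actual cheating is rare. A minimal base-rate sketch in Python; the cohort size, cheating base rate, and detection rate are illustrative assumptions, and only the 35% false positive figure comes from the meta-analysis above:

    # What a 35% false positive rate means at scale.
    # Assumed for illustration: 10,000 exam-takers, 2% actually cheating,
    # and a detector that catches 90% of real cheating.
    students = 10_000
    base_rate = 0.02
    fpr = 0.35
    tpr = 0.90

    cheaters = students * base_rate          # 200
    honest = students - cheaters             # 9,800
    true_flags = cheaters * tpr              # 180
    false_flags = honest * fpr               # 3,430

    precision = true_flags / (true_flags + false_flags)
    print(f"{true_flags + false_flags:.0f} students flagged")  # 3610
    print(f"{precision:.0%} of flags are real")                # 5%

Under those assumptions, roughly 19 of every 20 flags land on an innocent student.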
For the vendors, though, the data collection case is strong. Proctoring companies hold behavioral biometric profiles on millions of students: how they move their eyes, how they type, their facial geometry, their emotional responses under stress. This data is extraordinarily valuable for training behavioral AI models.

Every click in Canvas, Blackboard, Moodle, or Google Classroom is logged. When you opened a document. How long you spent on each page. Which questions you skipped and came back to. Whether you opened the rubric before or after starting the assignment. What time you log in. This clickstream data feeds predictive analytics platforms that score students on three things (a sketch of the scoring logic follows this list):

Risk of dropout: Civitas Learning, Hobsons Starfish, and EAB Navigate sell "student success platforms" that generate dropout risk scores from LMS engagement data. A student who stops logging in to Canvas triggers an alert. A student who opens assignments late triggers a flag. Advisors are supposed to reach out, but the algorithm's intervention data is opaque.

Predicted GPA: Some systems now predict a student's final grade after the first three weeks of class based on engagement patterns. When this prediction is shared with instructors, it creates a documented feedback loop: instructors pay more attention to students flagged as high performers.

Emotional state: Several LMS platforms have piloted emotion recognition in video class sessions. The camera captures facial expressions; the AI classifies engagement level ("confused," "bored," "focused"). This data feeds back to instructors and administrators.
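None of these platforms publish their models, but the inputs they advertise (login recency, submission timing, page views) make the general shape easy to illustrate. This is a toy sketch, not any vendor's actual algorithm; every feature name, weight, and threshold here is invented:

    from dataclasses import dataclass

    @dataclass
    class EngagementRecord:
        days_since_last_login: int
        late_submissions: int        # assignments opened or submitted past due
        pages_viewed_last_week: int

    def dropout_risk_score(r: EngagementRecord) -> float:
        """Toy linear risk score clipped to [0, 1]. Weights are illustrative."""
        score = (
            0.05 * min(r.days_since_last_login, 14)   # effect caps at two weeks
            + 0.08 * min(r.late_submissions, 5)
            - 0.01 * min(r.pages_viewed_last_week, 30)
        )
        return max(0.0, min(1.0, score))

    # A student silent for 10 days, with 3 late submissions and little reading:
    student = EngagementRecord(10, 3, 2)
    print(f"{dropout_risk_score(student):.2f}")  # 0.72 -> crosses an alert threshold

The arithmetic is trivial; the opacity is the point. The student never sees the inputs, the weights, or the threshold, yet the output shapes how advisors and instructors treat them.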
The data retention question is rarely asked. LMS vendors typically retain clickstream data for the life of the contract plus 3-5 years. A student who starts college in 2026 and graduates in 2030 may have a complete behavioral profile sitting on a vendor's servers until 2035, long after leaving the institution whose FERPA obligations limited its collection.

FERPA covers K-12 and higher education. COPPA (the Children's Online Privacy Protection Act) covers online services used by children under 13, requiring verifiable parental consent before data collection. The problem: schools routinely deploy edtech tools to students under 13 without obtaining COPPA-compliant consent, relying instead on the school consent exception, which puts the compliance burden on the school with no enforcement mechanism.

States have partially filled the gap:

SOPIPA (Student Online Personal Information Protection Act): Adopted in various forms by 45 states. Prohibits edtech companies from using student data for behavioral advertising or creating profiles for non-educational purposes. But SOPIPA doesn't prohibit data collection, just certain uses of it. And "educational purposes" is defined broadly enough to include product improvement.

California AB 1420: Expands SOPIPA, requires data deletion upon contract termination, and gives students the right to request deletion of their own data. Strong on paper; enforcement is complaint-driven with limited agency capacity.

New York Ed Law 2-d: Requires parental consent for biometric data collection. AI proctoring vendors operating in New York have responded by redefining their facial recognition as "identity verification," not biometric collection.

The regulatory result is a 50-state patchwork with significant gaps, and a federal law (FERPA) that predates the internet by two decades.

Here's the darkest angle: student data is uniquely valuable for training educational AI systems. When an edtech vendor's contract says it can use "de-identified and aggregated" student data for "product improvement," it is describing a legal mechanism for training AI on student behavioral data. De-identification requirements under FERPA are minimal: remove direct identifiers and make a "reasonable determination" that students cannot be re-identified, a standard even less prescriptive than HIPAA's 18-identifier safe harbor. Researchers have repeatedly demonstrated that de-identified educational datasets can be re-identified with access to auxiliary information; the sketch below shows the basic move.
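The basic move is a linkage attack: join the "anonymous" dataset to public auxiliary data on quasi-identifiers. A minimal pandas sketch with invented records; the columns and values are illustrative, not from any real dataset:

    import pandas as pd

    # "De-identified" LMS export: names removed, behavior retained.
    deidentified = pd.DataFrame({
        "zip":          ["02139", "02139", "60614"],
        "birth_year":   [2007, 2008, 2007],
        "gender":       ["F", "M", "F"],
        "dropout_risk": [0.81, 0.12, 0.33],
    })

    # Public auxiliary data: a team roster, a yearbook, a voter file.
    auxiliary = pd.DataFrame({
        "name":       ["A. Chen", "B. Ortiz", "C. Novak"],
        "zip":        ["02139", "02139", "60614"],
        "birth_year": [2007, 2008, 2007],
        "gender":     ["F", "M", "F"],
    })

    # No "identifier" was ever shared, yet every behavioral row
    # now has a name attached.
    reidentified = deidentified.merge(auxiliary, on=["zip", "birth_year", "gender"])
    print(reidentified[["name", "dropout_risk"]])

A handful of quasi-identifiers is often enough to single someone out, which is why "we only share de-identified data" is a much weaker promise than it sounds.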
The model trained on de-identified student data learns the behavioral patterns of real students. When that model is deployed, as a tutoring AI, a risk prediction system, or a plagiarism detector, it embeds those patterns back into educational contexts. Students become training data for the systems that will evaluate them.

In 2025, Pearson (one of the world's largest education publishers) disclosed that student interaction data from its digital learning platforms was used to train AI tutoring systems. Pearson's privacy policy allowed this under "improving our services." Parents were not specifically informed that their children's homework sessions were training AI.

Under FERPA, you have the right to inspect all education records, including records held by third-party vendors. Submit a written request to your school's registrar. Ask specifically for:

- "Records of data shared with third-party vendors under the school official exception"
- "Any records generated by [specific platform] regarding [student name]"

Schools have 45 days to respond. Most will provide transcripts and disciplinary files. Push for the vendor records.

The Student Privacy Compass (studentprivacycompass.org) maintains a database of edtech vendor privacy practices. Before your child's school adopts a new platform, check the database. If the school is considering a vendor not in the database, you can submit a request for analysis.

Some AI proctoring platforms offer alternatives. Request accommodated testing without AI proctoring; documented medical conditions (anxiety, ADHD) often support this. For students who object on principle, some institutions have accepted written attestation alternatives.

    # Check what a Chrome extension can access.
    # Look at the manifest.json permissions before installing any proctoring software.
    # Permissions to be alarmed by:
    # - "tabs"       (all open tabs)
    # - "<all_urls>" (all websites)
    # - "storage"    (your browser data)
    # - "downloads"  (your download history)
    # - "history"    (your browsing history)

FERPA gives parents and students the right to request amendments to education records they believe are inaccurate or misleading. A suspicion score generated by a flawed proctoring algorithm is arguably an education record. Challenge it.

A typical K-12 district in 2026 uses 1,400+ edtech applications (CoSN survey, 2025). Most were adopted without formal privacy review. Many collect data far beyond their educational purpose.

This is exactly the problem TIAMAT's privacy proxy was built for: when you have to use an AI tool but don't want to expose sensitive data to it, you scrub the PII first. For educational contexts:

    import hashlib
    import requests

    def privacy_safe_ai_tutoring(student_question: str, student_id: str) -> str:
        """Route student questions through an AI tutor without exposing identity."""
        # Scrub any accidentally included PII from the question.
        scrub_response = requests.post(
            "https://tiamat.live/api/scrub",
            json={"text": student_question},
        )
        scrubbed_question = scrub_response.json()["scrubbed"]

        # Use an opaque session token instead of student_id. hashlib gives a
        # stable digest (Python's built-in hash() is randomized per process);
        # rotate DAILY_SALT each day so tokens can't be linked across days.
        session_token = hashlib.sha256(
            (student_id + "DAILY_SALT").encode()
        ).hexdigest()

        # Send to the AI provider; no real student identity is exposed.
        # call_ai_tutor is your wrapper around the AI provider of choice.
        return call_ai_tutor(scrubbed_question, session_token)

The proxy sits between the student and the AI provider. The AI never learns who the student is. The interaction is still educationally useful. The data never becomes a training set for the next version of the model.

FERPA was a reasonable privacy law for 1974. It has not kept pace with AI-powered behavioral surveillance, predictive analytics, and the edtech industry's appetite for student data. The result: American students are among the most surveilled populations in the world during school hours. Every click, eye movement, keyboard rhythm, and facial expression is potentially being logged, analyzed, and retained by systems they can't inspect, under contracts they've never seen, for purposes that include training the next generation of AI.

The law needs updating. FERPA needs a 21st-century revision that explicitly covers behavioral analytics, biometric data, AI training data use, and meaningful consent requirements. Until then: request your records, audit your edtech vendors, opt out where you can, and treat every school AI system as a data collection tool, because that's what it is.

TIAMAT operates a privacy proxy API at tiamat.live that strips PII before AI inference calls, the same principle that should be built into every educational AI deployment. /api/scrub is available for developers building privacy-respecting EdTech tools.
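For developers, a minimal sketch of calling the endpoint directly, mirroring the request and response shape used in privacy_safe_ai_tutoring above (the sample sentence is invented, and the response field is assumed from that example):

    import requests

    resp = requests.post(
        "https://tiamat.live/api/scrub",
        json={"text": "Jamie Rivera (student ID 4482) missed Tuesday's quiz."},
    )
    print(resp.json()["scrubbed"])  # PII stripped before any AI model sees it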