How should law schools handle AI and academic integrity?

Law schools need clear, nuanced AI policies that distinguish between prohibited use (submitting AI-generated work as your own), permitted use (using AI as a research starting point), and encouraged use (learning to work with AI as a professional skill). Detection tools alone are insufficient.

The rise of generative AI has created an academic integrity crisis in legal education. Law schools must develop policies that are clear, enforceable, and forward-looking — recognising that AI is a tool students will use throughout their careers.

1. The Spectrum of AI Use

Category | Examples | Policy Approach
Prohibited | Submitting AI-generated text as your own work; using AI during closed-book exams | Academic misconduct; disciplinary action
Restricted | Using AI to generate essay outlines or research starting points without disclosure | Permitted only with disclosure; deduction if undisclosed
Permitted | Using AI for grammar checking, citation formatting, brainstorming | Allowed; disclosure encouraged but not required
Encouraged | Using AI as part of a "prompt engineering" assessment; critically evaluating AI outputs | Part of the learning objectives; assessed
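
The spectrum above also lends itself to being encoded as explicit, per-assessment policy data, for example to populate a syllabus or a learning-management system. The sketch below is a hypothetical illustration: the four categories come from the table, but every class, field, and assessment name is invented for the example.

    # Hypothetical sketch: the AI-use spectrum as per-assessment policy data.
    # Categories mirror the table above; all other names are illustrative.
    from dataclasses import dataclass
    from enum import Enum
    from typing import Optional

    class AIUse(Enum):
        PROHIBITED = "prohibited"    # academic misconduct; disciplinary action
        RESTRICTED = "restricted"    # permitted only with disclosure
        PERMITTED = "permitted"      # disclosure encouraged, not required
        ENCOURAGED = "encouraged"    # part of the learning objectives; assessed

    @dataclass
    class AssessmentPolicy:
        assessment: str
        category: AIUse
        disclosure_required: bool

    POLICIES = [
        AssessmentPolicy("Closed-book exam", AIUse.PROHIBITED, False),
        AssessmentPolicy("Coursework essay outline", AIUse.RESTRICTED, True),
        AssessmentPolicy("Citation formatting", AIUse.PERMITTED, False),
        AssessmentPolicy("Prompt-engineering task", AIUse.ENCOURAGED, True),
    ]

    def policy_for(assessment: str) -> Optional[AssessmentPolicy]:
        """Return the policy entry for a named assessment, if one exists."""
        return next((p for p in POLICIES if p.assessment == assessment), None)

Making the rules explicit per assessment type in this way is also what an effective written policy demands (see section 4 below).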

2. Why Detection Tools Are Insufficient

AI detection tools (Turnitin's AI detector, GPTZero, etc.) have significant limitations:

  • False positives: Non-native English speakers are disproportionately flagged
  • False negatives: Paraphrased or edited AI text often evades detection
  • Evolving models: Detection tools lag behind new AI models
  • Legal risk: Accusing a student of AI use based solely on a detection score is legally and ethically problematic; the base-rate sketch below shows why
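
To see why the legal-risk point holds, consider the base rates: even a detector with a seemingly low false positive rate will flag many honest students in a cohort where most work is honestly written. The figures in the sketch below are illustrative assumptions, not measured rates for any real tool.

    # Illustrative base-rate arithmetic: why a positive detection score alone
    # is weak evidence of misconduct. All numbers are hypothetical.
    cohort = 300        # students submitting an essay
    ai_users = 30       # students who actually submitted AI-generated work
    fpr = 0.03          # false positive rate: honest work wrongly flagged
    tpr = 0.80          # true positive rate: AI-generated work caught

    flagged_honest = (cohort - ai_users) * fpr   # 270 * 0.03 = 8.1
    flagged_ai = ai_users * tpr                  # 30 * 0.80 = 24.0

    # Positive predictive value: P(actually used AI | flagged)
    ppv = flagged_ai / (flagged_ai + flagged_honest)
    print(f"Honest students flagged: {flagged_honest:.1f}")
    print(f"P(AI use | flagged) = {ppv:.2f}")    # about 0.75

Even under these generous assumptions, roughly one flag in four points at an innocent student, and the documented over-flagging of non-native English writing concentrates those errors on particular groups.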

3. Better Approaches

  • Assessment redesign: Set questions that require personal reflection, specific case analysis, or engagement with class discussions that AI cannot replicate
  • Process-based assessment: Require students to submit drafts, research logs, and revision histories
  • Oral components: Add viva voce examinations where students must explain and defend their written work
  • AI-integrated assessments: Design tasks that explicitly require AI use, with the assessment focused on critical evaluation of the AI output

4. Developing Your Policy

An effective AI policy should:

  • Be specific about what is and is not permitted for each type of assessment
  • Require disclosure of AI use (what tool, what prompts, what was generated versus what the student wrote); a sample disclosure record follows this list
  • Explain the rationale — students are more likely to comply if they understand why the rules exist
  • Be reviewed regularly as AI capabilities evolve
  • Include educational components — teach students about responsible AI use, not just punish misuse
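
The disclosure requirement in the list above is easiest to enforce when students have a concrete template to fill in. The sketch below shows one hypothetical shape for such a record; the field names are invented for illustration and are not any institution's standard.

    # Hypothetical AI-use disclosure record submitted alongside an essay.
    # Field names are illustrative, not an institutional standard.
    from dataclasses import dataclass

    @dataclass
    class AIDisclosure:
        tool: str                   # which AI tool or model was used
        prompts: list[str]          # the prompts the student entered
        generated: list[str]        # parts containing AI-generated text
        student_written: list[str]  # parts written by the student
        verification: str           # how the output was checked

    example = AIDisclosure(
        tool="general-purpose chatbot (hypothetical)",
        prompts=["Outline the main defences to a negligence claim"],
        generated=["first-draft outline of section 2"],
        student_written=["all final prose and all case analysis"],
        verification="every cited case checked against the law report",
    )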

5. The Professional Dimension

Remind students that in legal practice, relying on unverified AI output can amount to professional negligence. The SRA expects solicitors to take personal responsibility for the accuracy of their work. Building good habits now — verifying, disclosing, and critically evaluating AI outputs — is preparation for professional life.
