AI Guidelines for Teachers

Give teachers rules they can use on Monday morning.

Plain-language guidance on what teachers can do with AI, what students can do with AI, and how to handle disclosure — without reading the full policy.

This guide is for administrators responsible for giving teachers usable AI guidance. It covers how teacher-facing guidelines differ from the board-adopted policy, how to handle the per-assignment decision, what training expectations to set, and how to respond when a teacher suspects AI misuse.

Why teachers need written guidelines

Without written guidelines, teachers apply different rules to the same assignment across classrooms. Parents and students notice the inconsistency. The first time an integrity dispute escalates to the principal's office, the absence of written guidance becomes the center of the conversation.

Written guidelines do three things: they make enforcement consistent, they give teachers cover when they apply the rules, and they surface the district's stance in a form staff can actually reference. The guidelines are a translation of the adopted policy into plain language — not a replacement for it.

Teacher use vs student use

District guidelines have to address both. Teacher use of AI for lesson planning, differentiation, and accommodation is commonly encouraged and represents a different risk profile than student use. Student use is where academic integrity, disclosure, and grade-band decisions apply.

A typical district stance: teachers may use approved tools for lesson prep, rubric drafting, and accessibility scaffolds without disclosure; student-facing work (feedback, grades, communications) requires teacher review before any AI-generated content reaches the student or family. The two uses need separate rules, or staff will conflate them.

Policy Excerpt

Teacher Responsibilities · K–12 District

Jombli-generated excerpt for a hypothetical Texas district.

Protect student privacy in prompts. Do not enter student personally identifiable information (PII) into any tool not on the approved list. Use de-identified descriptors in prompts, as outlined in the Privacy section.

Complete required training. Complete AI training annually and maintain records of completion in accordance with district HR practices.

Maintain human-in-the-loop for consequential decisions. Do not issue grades, disciplinary recommendations, IEP/504 determinations, or other consequential decisions based solely on AI output without the teacher's own review and judgment.

Per-assignment decisioning

The district sets the gradient; the teacher decides per assignment within that gradient. A useful framing for teachers: "What is the learning I want to protect?" If the goal is the student's own thinking, restrict AI. If the goal is production or accessibility, allow it with disclosure.

Guidelines should give teachers a short decision pattern: identify the target skill, classify the assignment (restricted, allowed with disclosure, or AI-integrated), communicate the classification to students in writing, and hold the line consistently. The classification needs to be explicit because ambiguous expectations generate disputes.

Policy Excerpt

Student Use · K–12 District

Jombli-generated excerpt for a hypothetical Texas district.

At the middle school level, AI is used under teacher supervision with explicit scaffolding to support structured planning and verification. Students may not use AI on quizzes, tests, or graded assessments unless the teacher has explicitly authorized AI for that specific task and described what part of the task AI may support.

Permitted uses:

  • Outline a persuasive essay in English/Language Arts using a teacher-provided template, then draft independently.
  • Summarize scientific articles in Science, verified against two class-approved sources.
  • Generate historical timelines in Social Studies, reviewed with a peer using the annotation checklist.
  • Develop math problem sets in Mathematics, checked against the assignment rubric before submission.

Training expectations

Districts typically choose one of three training postures: required before first use, required annually, or encouraged but not required. The posture should match the district's overall risk tolerance and capacity. Required-before-use is the safest for adoption but adds onboarding friction. Annual is the most common posture for districts with existing professional development cycles.

Training content should cover: approved tools and their permitted uses, what constitutes PII and what cannot go into a prompt, the district disclosure standard, and the procedure for handling suspected student misuse. Keep it under 90 minutes for the first pass.

Privacy in prompts

The single most important teacher-facing privacy rule is about prompts. Teachers should not paste student names, grades, IEPs, behavior records, or other identifiable information into any AI tool that has not been approved for that data sensitivity level. Describing the class in aggregate, anonymized terms is usually fine; naming individual students is not.

Guidelines should give teachers concrete substitutions: use "Student A" instead of names, summarize IEP strategies by category rather than quoting, and keep any behavior context anonymous. When in doubt, the prompt should be shareable in a faculty meeting without concern.

Handling suspected misuse

AI-detection tools have high false-positive rates. Teacher guidelines should explicitly warn against using detection output as the sole basis for an academic integrity finding. The recommended procedure is a conference with the student, a look at drafting history (revision records or version history, where the LMS provides them), and application of the district's existing academic integrity response.

Suspected misuse is handled under the district's existing discipline framework — not a new AI-specific one. Guidelines should name that framework and cite the relevant sections of the student handbook, so the teacher has one document to reference rather than three.

Onboarding new teachers

New teachers need the guidelines in the first week. Most districts fold them into new-hire onboarding alongside acceptable-use policy, FERPA training, and the student code of conduct. A one-page quick-reference printed and posted in staff workrooms, plus the full guidelines in the staff handbook, covers the dual need for glance-ability and reference.

Adoption checklist

Ten steps to move district AI guidelines from a handbook file into how teachers actually work.

  1. Publish the guidelines in the staff handbook. Do not rely on people finding an email from six months ago.
  2. Print and post the one-page quick-reference in staff workrooms.
  3. Hold a 45-minute faculty meeting introducing the guidelines, with Q&A.
  4. Identify one or two teacher champions per building to answer first-line questions.
  5. Schedule training — required before first use, annually, or encouraged, per the district’s posture.
  6. Pair the guidelines with the approved tools list and a process for requesting new tools.
  7. Put one AI case in every first-month staff meeting for the semester. Normalize the decisioning pattern.
  8. Give department chairs a subject-specific addendum. Use their language.
  9. Document a referral path for suspected student misuse. Route through the existing academic integrity process.
  10. Review at semester end. Adjust from real incidents, not hypotheticals.

Frequently asked questions

What should teachers do before using AI in the classroom?
Confirm the tool is on the district's approved list, complete required training, read the teacher guidelines, and model for students what appropriate AI use looks like on the specific assignment. Avoid entering student names, IEPs, or other identifiable information into any AI tool.
How do teachers decide when AI is allowed on an assignment?
The policy sets the district gradient; the teacher decides per assignment within that gradient. A useful framing: "What is the learning I want to protect?" If the goal is the student's own thinking, restrict AI. If the goal is production or accessibility, allow it with disclosure.
What about teachers using AI for their own lesson planning?
Teacher use for lesson planning, differentiation, and accommodations is typically encouraged. Teacher use for generating graded student feedback is more sensitive — most policies require the teacher to review and sign off before any AI-generated feedback reaches a student.
How should teachers handle suspected student AI misuse?
Treat it like other academic integrity issues: follow the district's existing progressive-discipline or restorative path. Do not rely solely on AI-detection tools (they have high false-positive rates). Hold a conference, look at the student's drafting history, and apply consequences consistent with policy.
Does Jombli generate teacher guidelines?
Yes — every generated policy includes a teacher guidelines section in plain language, plus a teacher quick-reference template you can print and post. The guidelines flow from the same intake you completed, so they match the policy exactly.

Hand your teachers guidelines — not a policy document.

Every generated policy includes teacher guidelines in plain language.