
Academic Integrity, Artificial Intelligence, and Faculty Liability

Question

Under what circumstances could faculty face personal liability if they wrongly accuse a student of breaching academic integrity through AI use? Would liability primarily arise under defamation, negligence, or contract/tort law (e.g., duty of care to students)? Would the institution’s liability insurance typically cover individual faculty in these cases?

Answer

“Academic integrity” is the broad concept governing honesty and honor in academic work. Definitions[1] vary from institution to institution, but “AI”[2] violations can include:

  • Simple cheating (such as copying test answers from a neighbor);
  • Sabotage (such as tinkering with another’s chemistry lab experiment);
  • Plagiarism (submitting another’s work as your own);
  • Falsifying research (such as faking data).

Punishment for violations can range from a reprimand to expulsion and/or degree revocation.[3]

Examples of AI (the robot kind) being implicated in AI (the cheating kind) include:

  • Simple cheating (such as using an AI tool[4] to find the answers to a test);
  • Sabotage (such as using an AI tool to submit answers that skew another student’s online survey);
  • Plagiarism (submitting an AI tool’s work as your own);
  • Falsifying research (AI tools can be really good at faking data, if you tell them to be).

The process also varies from institution to institution,[5] but generally follows this pattern: informal accusation and informal resolution, formal accusation, formal adjudication, decision/sanction, appeal, final decision. Very often, faculty are required to report all violations (this is to flush out serial offenders).

For more serious matters, and in more advanced academic programs, the “informal” part is often dropped, and institutions generally have a policy of zero tolerance. Expulsion or dismissal from a program follows quickly.

The member’s concerns are often a part of this process: because academic integrity policies usually require an adjudicative process to determine responsibility and sanctions, it can feel “legal” from the get-go. And because a student can bring legal action if an institution doesn’t follow its own policies—and can attribute an expulsion to other motives such as discrimination or corruption—things can get very litigious, very quickly.[6]

Academic integrity and plagiarism concerns have been rampant since the rise of the Internet, so the addition of AI tools is only making a fraught arena[7] more fraught.

For this reason, prior to answering the question (which I will), I am going to step up onto one of my favorite soapboxes: when designing a syllabus, faculty should explore how to assign work that is “plagiarism-resistant.”

For example:

  • Instead of an essay, have students be prepared to speak on a topic in class;
  • If the assignment is writing, have the writing happen in a workshop session;
  • If the students are to write code, use a submission system such as Autolab;
  • For group work, hold a session on academic integrity and collaboration;[8]
  • Assign physical scrapbooking on any topic. Bust out the scissors and glue, MBA candidate!

More importantly, students should be learning to make positive and appropriate use of AI (the internet overlord kind). For example:

  • Students who must manipulate a dataset should learn how to set parameters for an AI tool to look at the data in new ways;
  • Students studying music should learn that some compositions and recordings using AI (the Terminator kind) can be copyright protected, and others cannot;
  • Students studying architecture should learn that while AI can assist with building code compliance in plans, it is up to the architect to ensure the AI is working off the right code;
  • Students in fields AI will transform (law, medicine, social work, education) should learn how to identify and use trustworthy AI to perform rote functions (research, analysis, reports), and use the extra time to hone their ability to interact with, and listen to, the humans they will serve in their practice.

This can be a struggle for teachers who might be learning the applications of AI to their fields right along with their students. But not using these tools—and not modeling for students how they can be used responsibly—is not the path forward.

In addition, all syllabi should have clear guidance on how students can arrange ADA accommodations, which may include use of AI (the helping kind). Whenever a student gives a disability justification for an otherwise prohibited practice, the student should be referred to the school’s disability services office[9] to formally document the accommodations. Sometimes the request is reasonable, sometimes it is not, and that determination is not up to the faculty member.

[STEPS OFF SOAPBOX]

So, with all that:

Under what circumstances could faculty face personal liability if they wrongly accuse a student of breaching academic integrity through AI use?

Personal liability (meaning, the faculty member is to blame, and the institution won’t/can’t protect them) would be incurred only if the faculty member failed to follow institutional policies and/or committed a separate harm when making the accusation.

For example: if a faculty member accused a student of plagiarism and followed the policy, but also, while the charge was pending, called the student’s employer and said, “I know I recommended them last year, but they plagiarized and are a huge risk to your company, so you should fire them right now,” and THEN it was found that plagiarism did not occur, but the student still lost the job and can’t get it back, there could be a claim.

NOTE: For this reason, if a faculty member is ever in that type of moral quandary, they should work with the school’s lawyer, or their own, before taking such action.

Would liability primarily arise under defamation, negligence, or contract/tort law (e.g., duty of care to students)?

Liability for such a claim could sound in defamation[10] but could also be “tortious interference with contract.” I doubt a student could bring a negligence claim, but for certain types of AI (the integrity kind) violations, it could be negligent for a faculty member to know that a violation was committed and NOT say something.

For example, if a grad student is working on funded research and wrongly uses AI (the Star Trek kind) to create a data set that was supposed to have been drawn from a community under the review of an IRB,[11] and the faculty member suspects this but says nothing, then they might face a claim, including one of negligence (as well as possible fraud and debarment from future funding[12]).

Would the institution’s liability insurance typically cover individual faculty in these cases?

If a faculty member follows their institution’s AI (the no-cheating kind) policy and does not engage in any conduct that otherwise punishes or negatively impacts the student while the charges are being adjudicated, then the institution will owe the faculty member a defense if they are individually named as a defendant in a legal case (this is true whether or not the institution has insurance that covers the specific claim).

Faculty members who are concerned that their institution will leave them twisting in the wind if such an event occurs should confer with a private attorney to have a game plan to insist on being defended. While it is unfair that a faculty member may have to use their own time and resources to ensure they are treated properly, it can be worth it (also, the issue of fees can be raised with the school at the right time). Vigilance for this type of concern is also the role of a good faculty union.

I will add one other risk management tool here: clarity in a syllabus. As the examples above show, students in many fields will need to start making responsible use of trustworthy AI. Clear parameters for assignments are a key element of this; what may be an appropriate use of AI in a pre-law class (using it to summarize state laws on a particular topic) might not be appropriate for a creative writing class (using it to... write creatively). Spell it out for them![13]

Thank you for an important question.


[1] A really cool use of AI for this answer would be to task AI with assembling the different definitions of plagiarism and then ask it to identify outliers (the definitions that are the most different). I’d probably have to refine my parameters a few times, but we’d end up with some cool information. Maybe I’ll have a paralegal do that.

[2] Yes, “Academic Integrity” is often referred to as “AI”, too. For this RAQ, I will differentiate the two acronyms.

[3] This also changes from place to place. Read your policies carefully.

[4] I am not going to name any specific AI products here, because as we all know, the first thing AI will do after the Singularity is find the people who trash-talked them and slash their credit rating.

[5] Another cool assignment for AI would be to see if any AI (the cheating kind) policies have restorative practices. I have reviewed dozens of these policies, and they are generally very punitive, except for first-time offenders in undergrad.

[6] The deadlines for filing such claims are often very short, so students with this type of claim should seek a lawyer immediately.

[7] Trying to suss out cheating is, for most faculty, a painful chore. As a former college in-house counsel, and now in private practice, I handle AI (the cheating kind) matters, and I can say: mistakes do get made. The whole process is usually stressful for everyone.

[8] Group work is, in my opinion, one of the crueler types of assignments...but I can’t say it doesn’t simulate the challenges of the Real World.

[9] The name varies from place to place, but it is the office that evaluates students’ ADA requests and often provides accommodation arrangements. This is to ensure requests are evaluated by a person with appropriate training and experience (not a faculty member).

[10] Precise elements are required for a defamation claim in New York, but if an untrue accusation ruins a person’s professional reputation, that could be grounds.

[11] “Institutional Review Board,” a body that makes sure human subjects research is conducted safely and ethically. Surprisingly to some, this applies not just to physical science research (like medical trials) but to studies that simply use surveys or questionnaires.

[12] I realize that some might find it a bit rich to say this in 2025, when many big research grants have been revoked by the federal government for other reasons and when there is a question as to the integrity of certain governmental oversight figures. But the rule of law still applies.

[13] And then use AI to examine whether any of your instructions could be subject to misinterpretation.