Academia, AI, and Over the Garden Wall

Question

Faculty and students sometimes advise each other to upload articles downloaded from library-licensed databases into AI tools for summarization, or for study purposes, such as generating study questions and dialogs about the materials. These are not public domain articles that happened to be indexed in a library database.

Many of our faculty have access to ChatGPT EDU, which creates a "walled garden" around the files, preventing them from being used for AI training and treating them as institutional data. However, our students do not yet have access to the EDU account. In addition, many students and faculty are experimenting widely with other free AI tools on the Internet and are most likely uploading all types of files. I realize we cannot stop all of this, but if we have a statement to let library patrons know the proper uses, we are hopefully at least covering our obligations here.

Could you suggest a reasonable policy statement that libraries could publicize to their patrons regarding this issue to help ensure that patrons respect author and publisher rights and that libraries will not end up in legal trouble down the road?

Answer

Yes, I will do that.

But while I do that, let's also play a game.

Readers, please use your favorite AI and give it this prompt:

"Please suggest a reasonable policy statement that libraries could publicize to their patrons regarding this issue to help ensure that patrons respect author and publisher rights and that libraries will not end up in legal trouble down the road."

Let's see what your favorite AI says! Send your answers to nathan@losapllc.com and we'll post them in a coda to this Ask the Lawyer if we get at least three by April 1, 2026. Please let us know what tool you used and confirm we have your permission to use the output. 

Unassisted by AI[1], here is my version:

[Start of model statement]

WAIT!

Take a breath before you upload someone else's work into AI. 

Here is why: 

  • Uploading someone else's work to a site owned by someone else without permission is similar to making copies and distributing them (copyright infringement).
  • Depending on the AI you use, the summary or data you get may be unreliable.
  • Using the output could raise ethics and academic integrity concerns.

This posting is not to trash AI; it can be a very helpful tool. Here in the Library, our professional librarians are trained to help you find the right research tool for your work. See a librarian for input on what AI products are trustworthy for a particular purpose. 

We'll help you breathe easier. 

[End of model statement]

The legal bases for the bulleted items in the model statement are further discussed in Can Use of AI Impact Ownership and Citations in Academic Work? 

Now let's consider the other aspect of this question: the concept of the "walled garden."

As the member says, a "walled garden" is a "closed" environment. For licensed AI, it often means the user can "switch off" the AI's use of the user-supplied content to train the AI, or limit the training to a specific purpose (such as improving the user's experience).

Because this assurance is part of the legal terms of using a product, the phrase is also making its way into case law. Here in New York, it is part of the infamous "lawyer citing fake precedent and then citing fake precedent to defend himself from citing fake precedent" case:[2]

"In this letter, Mr. Feldman flagged for the Court the "significant challenge" he and many other practitioners face accessing unreported citations. (Dkt. #183 at 1-2; see also id. at 3 ("[I]t should not be assumed that everyone has access to the walled garden[s] of Westlaw or Lexis." [emphasis added]))"

The phrase also appears in the context of online advertising.[3]

Speaking as both a lawyer and a gardener, I find the easy assurance of a "walled garden" in a commercial product somewhat… iffy.[4] While I appreciate that the "Terms of Use" can provide contractual assurance that "what happens in YourAI stays in YourAI",[5] as any gardener knows, unwanted plants creep in (or out) no matter what. 

For example, even if your institution selects a paid subscription and enables the strictest "do not use" settings, it takes just one person with admin privileges to toggle the switches, and soon the rhizomes are putting up new shoots outside the garden wall. On a more nefarious note, only a few errors are needed for the product to not work as promised.[6] This requires users to be vigilant.[7]

For this reason, academic librarians' readiness to assist students and faculty in assessing the right AI product to use (and when not to use one) is one of the many reasons academic libraries are essential in today's higher-ed environment.

Academic librarians who train their teams to help students, faculty, and administrators assess the trustworthiness[8] and suitability of AI products will be ready to meet this challenge. Posting a short policy to inspire library users to connect and ask for help will hopefully get them access to that resource at the right time.

Thank you for a great question.

We'll see if we get that coda.


[1]^ But admittedly slightly assisted by caffeine.

[2]^ The citation for that case is Flycatcher Corp. v. Affable Ave. LLC, 2026 U.S. Dist. LEXIS 23980, 2026 LX 49318, 2026 WL 306683. I found this in the "walled garden" of LEXIS, which is one of the major expenses of running a law firm.

[3]^ See United States v. Google LLC, 778 F. Supp. 3d 797, 2025 U.S. Dist. LEXIS 74956, 2025 LX 206807.

[4]^ I was going to go with "suspicious", but that was too strong. It's just… iffy.

[5]^ "YourAI" is a fake product I invented for this answer. I don't want to pick on a real product or it will write me a bad review (check out the Wall Street Journal article from 2/13/2026 describing the experience of developer Scott Shambaugh after he rejected a few lines of his AI project's code).

[6]^ Just to be clear: I am not a luddite. I am "risk-focused."

[7]^ Not "up all night worrying" vigilant, but "checking regularly to confirm all is as it should be" vigilant.

[8]^ For more on assessing "trustworthiness," see the Ultimate AI Policy materials on the "Ask the Lawyer Webinar Recordings" page.