Recently Asked Questions (RAQs)

To search the database of RAQs, use the site-wide search. To browse, select a subject tag on the right side of this page.

Academia, AI, and Over the Garden Wall

Faculty and students sometimes advise each other to upload articles downloaded from library-licensed databases into AI tools for summarization or for study purposes, such as generating study questions and dialogues about the materials. These are not public domain articles that happened to be indexed in a library database.

Many of our faculty have access to ChatGPT EDU, which creates a "walled garden" around the files, preventing them from being used for AI training and treating them as institutional data. However, our students do not yet have access to the EDU account. In addition, many students and faculty are experimenting widely with other free AI tools on the Internet and are most likely uploading all types of files. I realize we cannot stop all of this, but if we have a statement to let library patrons know the proper uses, we are hopefully at least covering our obligations here.

Could you suggest a reasonable policy statement that libraries could publicize to their patrons regarding this issue to help ensure that patrons respect author and publisher rights and that libraries will not end up in legal trouble down the road?

Can Use of AI Impact Ownership and Citations in Academic Work?

I am aware that students are engaging with generative AI inside and outside of the academic setting. If they enter their own work (an essay, research paper, etc.) into ChatGPT or Copilot for editing or other purposes, do they forfeit any of their intellectual property rights in ways that would affect the future publication of their work?

Additionally, are there any current legal ramifications for failing to declare or cite the use of generative AI? I am aware of the policy and reputational ramifications, which can vary depending on the exact situation, so I’m specifically curious whether there are any legal repercussions for failing to do so.

Thank you so much for your help!

Hardening the Target in the Face of AI Bots

[This question came to us in response to the RAQ Does the Rise of AI Mean Public Libraries Should Stop Posting Policies to Ensure Security?, where a footnote said: “It is possible we are long past the end of the ‘open internet,’ and more things need to be restricted, both for legal and operational reasons. Hopefully we’ll get a question about that soon, because I have a lot to say.”]

Can we talk about putting things behind a log-in to avoid misappropriation of content? I have pretty much taken this question from the 10/14/25 Ask the Lawyer response “Does the Rise of AI Mean Public Libraries Should Stop Posting Policies to Ensure Security?” It strikes me as an important topic, as I recently read the Library Journal September 2025 article “AI Bots Cause Slowdowns, Crashes” (pages 12-13).

Does the Rise of AI Mean Public Libraries Should Stop Posting Policies to Ensure Security?

Hello,

We have had a huge increase in AI bots on our member library websites. My concern is that internal policies linked on member websites will be “learned” by AI and linked (cited) back to that member library. For example, a member might post its Emergency Action Plan within its Personnel Policy Manual, and details of its financial controls could be exploited by ransomware hackers. We go by the following list to define internal and external policies: https://nyslibrary.libguides.com/Handbook-Library-Trustees/policy-checklist

Would it be a “good practice” not to post internal policies online? If there are a few internal policies that you feel should be posted online, would it be best to state online that the policy exists, but ask patrons to contact the director (or library) for the file or print copy? That way, AI won’t be trained on the policy.

Thank you!

Academic Integrity, Artificial Intelligence, and Faculty Liability

Under what circumstances could faculty face personal liability if they wrongly accuse a student of breaching academic integrity through AI use? Would liability primarily arise under defamation, negligence, or contract/tort law (e.g., duty of care to students)? Would the institution’s liability insurance typically cover individual faculty in these cases?