

Privacy concerns related to software monitoring of public school library records


Question

I am a school librarian, and just found out my school district is using student-device monitoring software. The software uses AI to check for searches and content that could indicate consideration of self-harm. I am concerned the software will monitor access to school library content and violate student privacy. What can I do? 

Answer

This is a very serious concern.

Use of such monitoring software (such as GoGuardian’s Beacon, Gaggle, and others) is growing rapidly.[1]

Each technology works differently, but the common function is constant monitoring of searches and content on student devices to watch for signs of potential danger. When such signs are detected, both AI and real people are used to provide further assessment and intervention.

Deployed properly,[2] such software has been shown to be somewhat effective.[3] But in New York State, as of January 22, 2025, it seems to have been deployed without much overt consideration[4] of a student’s right to use the school library confidentially.

A student’s right to privacy when using a school library is built into governing ethics, educational standards, law, and regulations.[5] It is often also assured by the policies of a particular school district.[6]

As is often the case with rapidly developing technology, adoption of the tech may be outpacing consideration of all relevant legal factors, including how such software can be configured so it does not monitor the private use of the school library for research and information access.

In the K-12 environment, this is a delicate balance. While schools are allowed to access student education records[7] and library records[8] under particular circumstances, the wholesale monitoring of such records violates both the law and the ethics of library privacy. In addition, it is quite possible that students will research or access school library e-content that may “trip” the search terms, and, without a careful effort to exclude library searches and content, the software could yield a false positive… along with a privacy violation.
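What would “a careful effort to exclude library searches and content” look like in software terms? Below is a minimal sketch, in Python, of the kind of exclusion check a district could require of a vendor. It is purely hypothetical (the domain names and function are invented for illustration, and no real product is depicted): before a flagged event is reported, the monitor checks whether it occurred through a designated library service.

```python
# Hypothetical sketch only: not any vendor's actual implementation.
# Before a flagged search or page visit is reported, the monitor checks it
# against a district-maintained list of confidential library services.
from urllib.parse import urlparse

# Domains the district has designated as confidential library services.
# These names are illustrative assumptions, not real configuration values.
LIBRARY_EXCLUSIONS = {
    "catalog.exampledistrict-library.org",
    "databases.exampledistrict-library.org",
}

def should_report(visited_url: str, flagged: bool) -> bool:
    """Report a flagged event only if it did not occur via the library."""
    host = urlparse(visited_url).hostname or ""
    if host in LIBRARY_EXCLUSIONS:
        return False   # library use stays confidential, even if flagged
    return flagged     # everything else follows the normal alert path

# A flagged search run through the library catalog is not reported:
print(should_report("https://catalog.exampledistrict-library.org/search?q=x", True))
```

The point is not the particular code; it is that exclusion of library services is a concrete, specifiable product feature, which is why the procurement language below asks for it.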

Where does this leave school librarians?

Since the way this plays out may change from software to software and from district to district, and different districts are in different phases of considering or using such software, it is hard to say. Below is an array of possible actions a school librarian can take to raise a concern:

Below, each phase of concern is listed with template language for reporting the concern, followed by considerations.
1. School is considering use of student device monitoring software but hasn’t purchased it or passed a policy about using it.

Sample language for raising the concern: “As the product is sourced, please include specific language to ensure the device does not monitor the use of library services. As a policy is developed, it should affirm that library searches and content are not monitored.”

Sample language for the procurement: “Product must be able to assure exclusion of school library searches and library-obtained content from searches and reports.”

Build a supportive team[9] to:

Ask to see the procurement documents before the RFP is issued.

Stay engaged as a policy is developed.

Know how the content is being monitored and who the response team at the district is.

2. School is already using student device monitoring software, there is no policy requiring that library services not be monitored, and no incident is known.

At supervisor or IT level: “It has come to my attention that the school is using [SOFTWARE NAME]. Because student library records are confidential by law, it is important that any monitoring software expressly excludes use of library services (searches and content accessed via the library) or is otherwise respecting the privacy of student library records. I am also concerned library content could yield false positives. How is our district addressing that?”

Prior to raising such a concern, just like in “1,” above, it is wise to build a supportive team.
3. The request in “2,” above, is not answered satisfactorily.

To the Superintendent or School Board attorney: “It has come to my attention that the school is using [SOFTWARE NAME]. Because student library records are confidential by law, it is important that any monitoring software expressly excludes use of library services (searches and content accessed via the library) or is otherwise respecting the privacy of student library records. I am also concerned library content could yield false positives. How is our district addressing that?”

Prior to raising such a concern, just like in “1,” above, it is wise to build a supportive team.

If possible, having a person from that team raise the issue may be a more comfortable (and effective) approach.

4. No policy is in place, the software is in use, and a possible library privacy violation is detected.

Make an internal complaint: “It has come to my attention that the school is using [SOFTWARE NAME], and on [DATE], a student’s library search history was accessed.

Because student library records are confidential by law, it is important that any monitoring software expressly excludes use of library services (searches and content accessed via the library). Can we address this issue and ensure the program excludes these materials from searches in the future?”

Prior to raising such a concern, just like in “1,” above, it is wise to build a supportive team.

In this case, the school librarian can raise the issue, but it is very wise to have back-up.

5. A library privacy violation was reported, and the internal complaint was not responded to meaningfully.

File an external complaint to NYS Education Department’s Chief Privacy Officer.[10] 

It is wise to work with allies when crafting this, and to have legal advice[11] if possible.

This should include a copy of the internal complaint, so the Chief Privacy Officer knows your district had an opportunity to address this issue itself.

The link to report to the NYSED Chief Privacy Officer is at:

https://www.nysed.gov/data-privacy-security/educational-agencies-report-data-privacysecurity-incident

This is an important—even vital—topic. While the goal of student device-monitoring software is laudable, improper deployment of such technology can be a disaster. Proper deployment should consider all privacy obligations owed to the students being monitored. While there is no single solution (because the technology will vary from product to product), assurance that those obligations are met is vital.

Thank you for an important question. “Ask the Lawyer” will be alert for further developments on this emerging topic.


[1]^ For an overview, check out The New York Times article “Spying on Student Devices…”.

[2]^ And by “properly”, I mean that the HUMAN team at the other end is not simply an IT professional but an established team of safety and health providers qualified to assess threats and take appropriate action.

[3]^ See the NYT article cited in footnote 1.

[4]^ If there has been covert consideration, it’s time to be more obvious, people.

[5]^ See the American Library Association’s Code of Ethics, FERPA, and CPLR 4509, to name a few.

[6]^ Such assurance will vary widely, because policy is set at the school board level.

[7]^ As defined by FERPA and Education Law 2-c.

[8]^ As defined by CPLR 4509.

[9]^ I am very aware that often, the school librarian does not have access to the school board, its attorney, or upper-level administration. Building a team of your school library system leaders, your 3Rs council, and other support organizations can help.

[10]^ As of 1/23/25, there is no resolved complaint on file with the CPO as to how this type of concern will be addressed.

[11]^ Common places to reach out for this type of help are your union, your regional BOCES/school library system, or your regional library council/network.

Academic Integrity, Artificial Intelligence, and Faculty Liability


Question

Under what circumstances could faculty face personal liability if they wrongly accuse a student of breaching academic integrity through AI use? Would liability primarily arise under defamation, negligence, or contract/tort law (e.g., duty of care to students)? Would the institution’s liability insurance typically cover individual faculty in these cases?

Answer

“Academic integrity” is the broad concept governing honesty and honor in academic work. Definitions[1] vary from institution to institution, but “AI”[2] violations can include:

  • Simple cheating (such as copying test answers from a neighbor);
  • Sabotage (such as tinkering with another’s chemistry lab experiment);
  • Plagiarism (submitting another’s work as your own);
  • Falsifying research (such as faking data).

Punishment for violations can range from a reprimand to expulsion and/or degree revocation.[3]

Examples of AI (the robot kind) being implicated in AI (the cheating kind) include:

  • Simple cheating (such as using an AI tool[4] to find the answers to a test);
  • Sabotage (such as using an AI tool to submit skewing answers to another student’s online survey);
  • Plagiarism (submitting an AI tool’s work as your own);
  • Falsifying research (AI tools can be really good at faking data, if you tell them to be).

The process also varies from institution to institution,[5] but generally follows this pattern: informal accusation and informal resolution, formal accusation, formal adjudication, decision/sanction, appeal, final decision. Very often, it is required that faculty report all violations (this is to flush out serial offenders).

For more serious matters, and in more advanced academic programs, the “informal” part is often dropped, and the institutions generally have a policy of zero tolerance. Expulsion or dismissal from a program follows quickly.

The member’s concerns arise from this process: because academic integrity policies usually require an adjudicative process to determine responsibility and sanctions, it can feel “legal” from the get-go. And because a student can bring legal action if an institution doesn’t follow its own policies—and can attribute an expulsion to other motives such as discrimination or corruption—things can get very litigious, very quickly.[6]

Academic integrity and plagiarism concerns have been rampant since the rise of the Internet, so the addition of AI tools is only making a fraught arena[7] more fraught.

For this reason, prior to answering the question (which I will), I am going to step up onto one of my favorite soapboxes: when designing a syllabus, faculty should explore how to assign work that is “plagiarism resistant.”

For example:

  • Instead of an essay, a student must be prepared to speak on a topic in class;
  • If the assignment is writing, have the writing happen in a workshop session;
  • If the students are to write code, use a submission system such as Autolab;
  • In group work, have a session on academic integrity and collaboration in group work;[8]
  • Assign physical scrapbooking on any topic. Bust out the scissors and glue, MBA candidate!

More importantly, students should be learning to make positive and appropriate use of AI (the internet overlord kind). For example:

  • Students who must manipulate a dataset should learn how to set parameters for an AI tool to look at the data in new ways;
  • Students studying music should learn that some compositions and recordings using AI (the Terminator kind) can be copyright protected, and others cannot;
  • Students studying architecture should learn that while AI can assist with building code compliance in plans, it is up to the architect to ensure the AI is working off the right code;
  • Students in fields AI will transform (law, medicine, social work, education) should learn how to identify and use trustworthy AI to perform rote functions (research, analysis, reports), and use the extra time honing their ability to interact with and listen to the humans they will serve in their practice.

This can be a struggle for teachers who might be learning the applications of AI to their fields right along with their students. But not using these tools—and not modeling for students how they can be used responsibly—is not the path forward.

In addition, all syllabi should have clear guidance on how students can arrange ADA accommodations, which may include use of AI (the helping kind). Whenever a student gives a disability justification for an otherwise prohibited practice, the student should be referred to the school’s disability services office[9] to formally document the accommodations. Sometimes the request is reasonable, sometimes it is not, and that determination is not up to the faculty member.

[STEPS OFF SOAPBOX]

So, with all that:

Under what circumstances could faculty face personal liability if they wrongly accuse a student of breaching academic integrity through AI use?

Personal liability (meaning, the faculty member is to blame, and the institution won’t/can’t protect them) would only be incurred if the faculty member failed to follow institutional policies and/or committed a separate harm when making the accusation.

For example: if a faculty member accused a student of plagiarism and followed the policy, but also, while the charge was pending, called the student’s employer and said, “I know I recommended them last year, but they plagiarized and are a huge risk to your company, so you should fire them right now,” and THEN it was found that plagiarism did not occur, but the student still lost the job and can’t get it back, there could be a claim.

NOTE: For this reason, if a faculty member is ever in that type of moral quandary, they should work with the school’s lawyer, or their own, before taking such action.

Would liability primarily arise under defamation, negligence, or contract/tort law (e.g., duty of care to students)?

The personal liability for the claim could be defamation[10] but could also be “tortious interference with contract.” I doubt it could be a negligence claim by the student, but for certain types of AI (the integrity kind) violations, it could be negligent for a faculty member to know that the violation was committed and NOT say something.

For example, if a grad student is working on funded research and wrongly uses AI (the Star Trek kind) to create a data set that was supposed to have been drawn from a community under the review of an IRB,[11] and the faculty member suspects this but says nothing, then they might face a claim, including one of negligence (as well as possible fraud and debarment from future funding[12]).

Would the institution’s liability insurance typically cover individual faculty in these cases?

If a faculty member follows their institution’s AI (the no-cheating kind) policy and does not engage in any conduct that otherwise punishes or negatively impacts the student while the charges are being adjudicated, then the institution will owe the faculty member a defense if they are individually named as a defendant in a legal case (this is true whether or not the institution has insurance that covers the specific claim).

Faculty members who are concerned that their institution will leave them twisting in the wind if such an event occurs should confer with a private attorney to have a game plan to insist on being defended. While it is unfair that a faculty member may have to use their own time and resources to ensure they are treated properly, it can be worth it (also, the issue of fees can be raised with the school at the right time). Vigilance for this type of concern is also the role of a good faculty union.

I will add one other risk management tool here: clarity in a syllabus. As the examples above show, students in many fields will need to start making responsible use of trustworthy AI. Clear parameters for assignments are a key element of this; what may be an appropriate use of AI in a pre-law class (using it to summarize state laws on a particular topic) might not be appropriate for a creative writing class (using it to... write creatively). Spell it out for them![13]

Thank you for an important question.


[1]^ A really cool use of AI for this answer would be to task AI with assembling the different definitions of plagiarism and asking it to identify outliers (the definitions that are the most different). I’d probably have to refine my parameters a few times, but we’d end up with some cool information. Maybe I’ll have a paralegal do that.

[2]^ Yes, “Academic Integrity” is often referred to as “AI”, too. For this RAQ, I will differentiate acronyms.

[3]^ This also changes from place to place. Read your policies carefully.

[4]^ I am not going to name any specific AI products here, because as we all know, the first thing AI will do after the Singularity is find the people who trash talked them and slash their credit rating.

[5]^ Another cool assignment for AI would be to see if any AI (the cheating kind) policies have restorative practices. I have reviewed dozens of these policies, and they are generally very punitive, except for first-time offenders in undergrad.

[6]^ The deadlines for filing such claims are often very short, so students with this type of claim should seek a lawyer immediately.

[7]^ Trying to suss out cheating is, for most faculty, a painful chore. As a former college in-house counsel, and in my practice, I handle AI (the cheating kind) matters, and I can say, mistakes do get made. The whole process is usually stressful for everyone.

[8]^ Group work is, in my opinion, one of the cruel types of assignments...but I can’t say it doesn’t simulate the challenges of the Real World.

[9]^ The name varies from place to place, but it is the office that evaluates students’ ADA requests and often provides accommodation arrangements. This is to ensure requests are evaluated by a person with appropriate training and experience (not a faculty member).

[10]^ Precise elements are required for a defamation claim in New York, but if an untrue accusation ruins a person’s professional reputation, that could be grounds.

[11]^ “Institutional Review Board,” a body that makes sure human subject research is conducted safely and ethically. Surprisingly to some, this applies not just to physical science research (like medical trials) but to studies that simply use surveys or questionnaires.

[12]^ I realize that some might find it a bit rich to say this in 2025, when many big research grants have been revoked by the federal government for other reasons and when there is a question as to the integrity of certain governmental oversight figures. But the rule of law still applies.

[13]^ And then use AI to examine if any of your instructions could be subject to misinterpretation.

Does the Rise of AI Mean Public Libraries Should Stop Posting Policies to Ensure Security?


Question

Hello,

We have had a huge increase in AI bots on our member library websites. My concern is that internal policies linked on member websites will be “learned” by AI and linked (cited) back to that member library. I’m concerned that members might have their Emergency Action Plan in their Personnel Policy Manual, and that financial controls could be used by ransomware hackers. We go by the following list to define internal and external policies: https://nyslibrary.libguides.com/Handbook-Library-Trustees/policy-checklist

Would it be a “good practice” to not post internal policies online? If there are a few internal policies that you feel should be posted online, would it be best to say online that you have the policy, but please contact the director (or library) for the file/print copy? That way, AI won’t be trained on the policy.

Thank you!

Answer

The concerns raised by the member are valid: absolutely, Artificial Intelligence (AI) OR real people can use published documents, including policies, to exploit a target.

What’s interesting is that this issue actually pre-dates AI; it emerged early in the Internet era, when (often nefarious) people would use information published on websites—along with other techniques—to exploit targets.

Here is a fictional example:

A business’s website includes its protocol for visitors, photos of the interior of its office, and its fiscal policy. A would-be thief we’ll call “Cooper” reviews the protocol, assesses the office interior, and uses the information to gain access to a manager’s office, where Cooper acquires the serial number of a computer. Cooper then calls that office, pretending to be IT (the serial number aids this impersonation), and gets a username and password for the business’s online banking system, which Cooper uses to access accounts described in the fiscal policy.

Poof! Money gone.

To guard against this, many businesses take a careful risk management approach to what they publish (and hopefully admonish people who put their passwords on Post-its).

However, anyone who reads the news knows that financial fraud based on social engineering and computer intrusion is only going up, and artificial intelligence is helping with those attacks.

So, is it time to stop publishing public library policies and other documents?

No.

Published policies—even fiscal controls that set out the process for validating checks and the maximum amount of cash to keep in a safe—are not a skeleton key for hackers (AI or otherwise).

Of course, public institutions have always had to be careful about what information they make available. Staging areas and other resources for responding to terrorism and active shooters must be restricted to avoid exploitation by would-be attackers. Bank account numbers and other account-specific information should not be published. Computer passwords, the location of servers, and other sensitive information should be restricted. These considerations should be made in the drafting phase, not when the policy is ready for publication.

That said, because many of their records are FOILable,[1] public libraries should not rely on restricting access to them for security.

Rather, all public library workers and trustees with any part to play in data, financial, and physical security should be trained in the following:[2]

  • Never provide their password to anyone;
  • Follow fiscal controls at all times;
  • Follow all IT security rules at all times;
  • Notify the IT provider in the event of a suspected data breach, virus, or attack;
  • Never allow unauthorized people into restricted areas;
  • Report lost keys immediately;
  • Secure password lists;
  • Never access sensitive information on personally owned devices (like keeping the bank account username and password on a director’s cell phone);
  • Immediately report and document all outside requests for system and/or fiscal information (passwords, location of servers, banking information);
  • Remember that big hacks/ransomware attacks usually start with human failure (a given-away password, a device left logged in, a lost device).

So, are the member’s concerns valid? YES. Exploitive people can use AI to find, copy, and use your library’s policies in an attempt to gain access to critical systems.

BUT, if the policies are not published, such people can look up public grant information, building records, or meeting minutes to make themselves sound legitimate for a different social engineering scheme. And if your policies are not available to your community, your library runs the risk of being accused of a lack of transparency.

Instead of restricting access to policies, libraries should develop policies that help prevent the library’s financial exploitation.

For example, a public library’s financial policies should prescribe appropriate internal controls and appropriate use of technology to verify transactions before they become irrevocable. For this, the newly released (2025) local government guidance from the New York State Comptroller is excellent.[3] This is mandatory reading for all public library treasurers, controllers, CFOs, accountants, bookkeepers, and directors.

In the same vein, IT policy should include either adequate internal resources to routinely update security and train employees, or a contract with a provider that offers the same assurance (for many public libraries, this is the role of the library system, and it is an increasingly complex and costly role).

While care in drafting policy is important, the essential elements of avoiding ransomware and other attacks are routine updates to security measures and routine training of people to NOT BE FOOLED.

With the right training and adequate security, AI-powered or good ol’-fashioned hackers will have a tough time getting through, even if they try to use your own policy against you.[4] Train your people, and you don’t have to worry (too much) about training AI.

Now, if we want to talk about putting things behind a log-in to avoid misappropriation of content for the general good of society, that’s another story…

… for another “Ask the Lawyer.”[5]

Thanks for a great question!


[1] And yes, hackers know how to use the Freedom of Information Law.

[2] This is not an exhaustive or professionally phrased list, but it’s the gist of things.

[3] Cash Management Technology, Office of the State Comptroller (https://www.osc.ny.gov/files/local-government/publications/pdf/cash-management-technology.pdf).

[4] Nothing is foolproof, however, so the board should also annually verify that there is adequate insurance for loss due to ransomware and other cyber-attacks or failures.

[5] It is possible we are long past the end of the “open internet,” and more things need to be restricted, both for legal and operational reasons. Hopefully we’ll get a question about that soon, because I have a lot to say.

Hardening the Target in the Face of AI Bots

Submission Date

Question

[This question came to us in response to the RAQ Does the Rise of AI Mean Public Libraries Should Stop Posting Policies to Ensure Security?, where a footnote said “It is possible we are long past the end of the ‘open internet,’ and more things need to be restricted, both for legal and operational reasons. Hopefully we’ll get a question about that soon, because I have a lot to say.”]

Can we talk about putting things behind a log-in to avoid misappropriation of content? I have pretty much taken this question from the 10/14/25 Ask The Lawyer’s “Does the Rise of AI Mean Public Libraries Should Stop Posting Policies to Ensure Security?” response. It strikes me as an important topic as I recently read the Library Journal September 2025 article “AI Bots Cause Slowdowns, Crashes” (on pages 12-13).

Answer

Yes, we can talk about putting things behind a log-in to avoid misappropriation of content! Thank you for asking.

At the same time, we can (and must) talk about putting things behind a log-in to avoid problems with security, privacy, intellectual property, and data integrity.

Of course, by “things,” we mean “websites,” which are now a significant part of the services provided by libraries, museums, and archives.

Because websites perform a huge array of functions, for purposes of this question, we are going to talk about library, museum, and archival websites that perform the following functions:

  • Business information presentation (“About us,” “Our team,” “Policies,” etc.);
  • Data repositories (archives and online collections);
  • Searching the website and/or repository; and
  • Integrated library system services.[1]

Common website functions this question is NOT going to specifically cover are:

  • Financial transactions (like donating to a museum over a website);
  • Collaborative research (like crowd-sourcing a survey);
  • Interactivity (for example, a social media site).

We’ll tackle those another day.[2]

Why am I narrowing the scope this way?

After 30 years of development,[3] libraries, museums, and archives use their websites as alternatives to their physical locations. The value of this—if it was ever in question—was shown during the COVID-19 Pandemic.

Because of this, such websites must be:

  • Mission-focused;
  • Consistent and reliable;
  • Compliant; and
  • Trusted.

Current trends in Internet activity show that the risks that were always present when operating and relying on a website are only getting starker. In addition to the operability risks flagged in the Library Journal article cited by the member, the risks posed to security, privacy, and data integrity are significant, too.

Here is a short, fictional story that illustrates some of those risks, in combination with a few other factors:

***START OF SCENARIO***

The Scribe Museum is a beloved institution in Tinytown, New York. Tinytown is the birthplace of Daniel D. Scribe, who kept the minutes at the first meeting of an important civil rights organization.

The Scribe Museum is a solid limestone building that has the physical collection of the complete works of Daniel D. Scribe, and recently, it digitized its entire collection. The digital collection is hosted by another group, which subcontracts services to a cloud provider.

To preserve the physical collection while the building’s heating, cooling, and ventilation system is replaced, the Scribe Museum rents a temporary location and moves the archival material per established best practices.

The Scribe Museum’s website is www.scribemuseum.net, hosted by GoMommy.com. The digital collection is open to all. The website says “While our archives are safely off-site and our building is being given some TLC, peruse our digital collection! Civil rights are always open.” The Scribe Museum’s leadership is savvy and does not make the location of the relocated physical archives broadly known.

A person with a lot of free time decides that the Scribe Museum’s civil rights mission is too “woke.” They spend a few weeks patiently downloading the full archive in small tranches and then launch a bot attack that denies service to the website. They then modify the scanned documents in small but nasty ways, create an alternate website at www.scribemuseum.not, and post the altered documents to various social media sites for dissemination.

The villain also hacks the Scribe Museum’s server and holds the content for ransom, gets access to and posts all their emails, and uses social engineering to find the physical location of the archive for some old-fashioned property destruction. They also deliver some pizza to every board member as a “message.”

***END OF SCENARIO***

Ugh. Just writing that out was... not fun.

So how can a library, museum, or archive use a log-in system to help avoid this scenario?

We have to face it head-on: there is no one way to avoid this type of scenario, including use of a log-in. Rather, libraries, museums, and archives must use a combination of log-ins, enhanced security, back-ups, and intellectual property protections, and (most critically) train human beings to be safer, or as I call it, “harden the target.”

How does a library, museum or archive harden the target of its website?[4]

Several things:

First, a library, museum, or archive must consider the security and architecture of its website. Is it ready to withstand an attack? Is it set up to be resilient? What level of functionality must be assured?

To answer these questions, the institution must consider—and deeply reconsider—the purpose of its website. Is the website just a directory service (“Get here,” “Accommodations,” “Admission,” etc.), or is the content a core service? Does all the content currently on it have to be there? If so, does the benefit of immediate access outweigh the risks?

After asking these questions, the institution must consider the information it puts on the “open-to-all” part of the website, what it might want to put behind a log-in screen, and what should only be accessible after some human contact. For each level of access, the benefit of disclosure should be worth the risk of compromise.
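To make those three tiers concrete, here is a minimal sketch, written in Python with the Flask framework. Everything in it (routes, names, the museum) is invented for illustration; it is a sketch of the tiering concept, not a recommended production setup.

```python
# A hypothetical sketch of tiered website access: open pages, a log-in-gated
# collection, and content available only after human contact.
from flask import Flask, request, session, redirect, url_for

app = Flask(__name__)
app.secret_key = "change-me"  # in real use, load this from a secrets store

# Tier 1 (open to all): basic "about us" information.
@app.route("/about")
def about():
    return "The Scribe Museum: hours, location, mission."

# Tier 2 (free log-in): searching the full digitized collection.
@app.route("/collection/search")
def collection_search():
    if not session.get("user"):            # no authenticated session yet
        return redirect(url_for("login"))
    query = request.args.get("q", "")
    return f"Search results for {query!r} (authenticated users only)."

# Tier 3 (human contact): no self-service route at all; the page only
# tells the visitor how to arrange access with a real person.
@app.route("/rare-books")
def rare_books():
    return "To view the Rare Books Room, please call for an appointment."

@app.route("/login", methods=["GET", "POST"])
def login():
    if request.method == "POST":
        # Real authentication (and agreement to Terms of Use) would go here.
        session["user"] = request.form.get("username")
        return redirect(url_for("collection_search"))
    return '<form method="post"><input name="username"><button>Log in</button></form>'
```

The framework does not matter; what matters is that each tier is a deliberate design decision rather than a default.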

As the article cited by the member points out, this change is viewed as an existential threat by many cultural institutions. But while it is certainly a big change, it is also a chance to reinvest in human connectivity in addition to evolving technology.

Here are examples of how this opportunity can benefit an institution:

Example 1: After assessing its mission and website, a museum posts only its essential “about us” information on its unrestricted webpages. Wanting the website to stay engaged and dynamic, it also regularly showcases 20 examples of its prime collection, unrestricted and with metadata, on its website and social media. It then allows standing access to search its full digitized collection with a free log-in. To obtain a log-in, a user provides information to authenticate them as a valid user and agrees to the “Terms of Use.” When logged in during open hours, the user also has the ability to live-chat with a real human at the museum, a position that was specifically designed and built into the budget while the website presence was updated.

Example 2: After assessing its mission and website, a library posts all its “about us” information on its unrestricted web pages. Library users with cooperative library system cards can log in to perform all functions on the integrated library system (catalog search, reserves, seeing what books they have checked out). The library also has a separate log-in for those who are interested in its Rare Books Room; that log-in page is accessible after a general page describing the special collection in broad terms. Users without a library card can also call the library to make an appointment to view the rare books.

Example 3: After assessing its mission and website, an author’s archive posts its mission, location, fundraising, and contact information on its unrestricted web pages. The archive is by appointment only, onsite or via videoconference. Except for a few teaser documents to showcase the scope of the archive, the digitized version of the archive is similarly accessible on-site only. The archive invests in people being on-site and using technology to connect with those who want to work with the content. Since the content is still protected by copyright, the archive also registers and takes steps to put the proper notation on digitized content.

Example 4: After assessing its mission and website, a public university with a digital repository of over 200,000 documents related to health and wellness decides that the mission of the repository is only served if the repository can be searched and accessed without a barrier (such as a log-in). The university works with its IT staff and contract provider to design and invest in a database structure that can withstand periodic high “demand” caused by bots or targeted attacks and has a back-up in the event the primary site is interrupted. The university also develops an AI tool to assess when times of high demand require added resources.[5] The university develops and registers a trademark for the repository and uses it in key areas of the service. Workers are also trained and scheduled to be available on-demand for people who need help with the database. Although the extra design and security add costs, it is decided that the added reliability merits the expense.
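The “withstand periodic high demand” element of Example 4 is, at bottom, rate limiting. Here is a rough illustration in Python (a hypothetical sketch of the classic token-bucket technique, not the university’s actual design) of how a repository can keep serving normal users while throttling a bot that hammers it:

```python
# Token-bucket rate limiting: each client may make `rate` requests per second,
# with short bursts up to `capacity`; sustained bot traffic gets turned away.
import time
from collections import defaultdict

class TokenBucket:
    def __init__(self, rate: float, capacity: float):
        self.rate = rate
        self.capacity = capacity
        self.tokens = defaultdict(lambda: capacity)  # per-client token count
        self.stamp = defaultdict(time.monotonic)     # per-client last refill

    def allow(self, client_id: str) -> bool:
        now = time.monotonic()
        elapsed = now - self.stamp[client_id]
        self.stamp[client_id] = now
        # Refill tokens for the time elapsed, capped at capacity.
        self.tokens[client_id] = min(
            self.capacity, self.tokens[client_id] + elapsed * self.rate
        )
        if self.tokens[client_id] >= 1:
            self.tokens[client_id] -= 1
            return True    # serve the request
        return False       # reject (e.g., respond with HTTP 429)

limiter = TokenBucket(rate=5, capacity=20)  # 5 requests/sec, bursts of 20
if not limiter.allow("203.0.113.7"):        # client ID here is just an IP
    print("429 Too Many Requests")
```

In practice this logic usually lives in a web server, proxy, or firewall rather than application code, but the principle is the same at every layer.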

In each of these scenarios, the institution is using its mission to determine what needs to be freely online without the barrier of a log-in and what should be further restricted. Just as critically, the institution is considering how human talent fits in and how the institution keeps the resource secure and resilient.

Here at the end of 2025, it is really, truly time to take a long, hard look at what is freely available on websites.

Just like the Internet changed the world in the ’90s, AI and its ability to warp the Internet is changing the world in the 2020s. Wise institutions will use this as an opportunity to review their mission, assess their needs, and “harden the target” by structuring their online presence and policies to meet the needs of the present. The good news is that a key part of that is investing in people.

Thank you for a great question!


[1]^ Such as borrowing and reserving books, inter-library loans, and catalog searches.

[2]^ Or not! It depends on if the need arises.

[3]^ Or perhaps more. Many libraries were early adopters of the Internet.

[4]^ Hardening the target is not just about the online presence. It also involves having an updated Workplace Violence Prevention Policy, having an emergency response plan, being ready to work with authorities in the event of a threat, having adequate insurance, registering trademarks and copyrights, identifying and protecting trade secrets, and continuous training of and support for frontline staff. But this answer pertains to websites.

[5]^ Do not avoid the use of trustworthy AI. Just take the steps to verify that it is trustworthy and re-evaluate that finding regularly. For more on that, see The Ultimate AI Policy for Your (Public, Academic, Museum, etc.) Library on the Ask the Lawyer Webinar Recordings Page.

Can Use of AI Impact Ownership and Citations in Academic Work?

Submission Date

Question

I am aware that students are engaging with generative AI inside and outside of the academic setting. If they enter their own work (an essay, research paper, etc.) into ChatGPT or Copilot for editing, or other purposes, do they forfeit any of their intellectual property rights in doing so in ways that would affect the future publication of their work?

Additionally, are there any current legal ramifications for failing to declare the use of generative AI or failing to cite AI usage? I am aware of policy and reputation ramifications that can vary depending on the exact situation, so I’m specifically curious about whether there are any legal repercussions.

Thank you so much for your help!

Answer

[This answer is not being written by AI].

The short answer to the first question (can use of AI risk intellectual property rights in a way that can affect future publication?) is “Yes.”

The short answer to the second question (can there be legal consequences for failing to disclose use of AI?) is “Yes.”

Unfortunately, after those initial easy answers, the range of risks runs the gamut from “life-shattering” to “none at all.”

To illustrate, let’s take this ridiculously compound hypothetical situation:

A grad student is working on a grant-funded project to study social media use by third graders. The principal investigator[1] has developed a tool to counteract the addictive effects of social media on children; part of the project is testing it.

Because the study involves human subjects and minors, it is governed by a protocol that includes strict safety and confidentiality requirements.

The funder of the research has insisted that the copyright to the research and the final work will be owned by the funder. The PI is hoping to patent the tool being tested.

The grad student is supervising three work-study undergrad students who are working with the test subjects (the third graders). The grad student is getting a stipend of $500 whole dollars for over 500 hours of work and is hoping to be named as a co-author. The undergrad students are paid by the hour.

One day, the grad student assigns the undergrads the task of completing summaries of all of the test subject results. To do this, the three undergrads (who are also trying to get through finals) tell a free AI resource: “Create a summary of this information that lists the goal of the study, the methods, the controls, and the results for each subject, removing any identifying information about the subject except age. Also provide a summary of the individual reports, noting when the method applied led to reduction in use of social media, and contrasting that result with control subjects.” They then put the raw data through the AI resource and get 20 hours of work done in less than one. They don’t tell the grad student, disclose the use of the free AI, or retain any information about the AI product used.

In a “worst-case scenario world” some of the results of this could be:

  • Information sufficient to deduce the identity of the test subjects (who are minors) is freely available, creating a risk to their safety and identities;
  • The human subject safety and confidentiality requirements of the project are found to have been violated;
  • Violation of the protocols limits the number of reputable peer-reviewed journals that will consider publishing the work and jeopardizes future funding for the PI and the institution;
  • Years later, the PI’s patent is denied because the submission of the new method to the AI resource counted as publication;
  • The copyright requirements of the funder are violated, as substantial portions of the research were provided to the AI without permission, so the funder demands a return of funds;
  • The undergrad students are found responsible for academic integrity violations years after graduation and their degrees are revoked;
  • As the supervisor, the grad student is also accused of an academic integrity violation but is found responsible only for inadequate supervision of the undergrads.

Of course, this is a worst-case scenario. It is important to remember that for every “worst case” there can be a “best case” where trustworthy AI[2] is used responsibly to enhance research, increase efficiency, and maintain appropriate confidentiality. Such use should be disclosed in the final product and assessed as part of the research methodology.

Responsible use of AI is all about details and planning.

To alert students and others to these risks, it is helpful to raise their awareness of how posting to social media[3] and using certain AI products can affect them.

Below this answer is a sample “raising awareness” posting for study areas.[4]

I imagine the academic librarians out there can come up with a snappier version, but this one outlines the above-discussed things to consider before posting research on social media or putting it through AI.

Thank you for some great questions on important topics!

Wait.

Before you put your work on social media or put it through AI:

Think of your ethics: Does your work involve a code of ethics or professionalism?

Think of your obligations: Is the work governed by an IRB or process that restricts disclosure?  

Think of your privacy: Anonymity on the internet is not assured, even if you don’t claim authorship.

Think of your academic integrity: Did any of this work borrow from another in a way that could risk a charge of plagiarism?

Think of your copyrights: Do you love this work and want to protect it? Register the copyright before you post or share it through AI.

Think of your patents: Did you invent something? Putting it “out there” can limit your window for seeking a patent to protect your invention.

Think of your brand: Is this work a part of your personal or business identity? How do you want to be able to control it?

Think of your values: Is the social media platform or AI product consistent with how you think the world should work? Do you want to be a part of it?

If you need help finding resources about academic integrity, use of AI, and the rest of this, please visit the Reference Desk. We can help.


[1]^ “Principal Investigator” (or “PI”) is the term for the lead researcher on a project.

[2]^ “Trustworthy AI” is AI that has been evaluated and found to meet the privacy, security, operability, and interpretability required for a particular project. Every academic institution should have a policy for evaluating the trustworthiness of AI. For more on that, see the Empire State Library Network’s September 2025 presentation, The Ultimate AI Policy for Your (Public, Academic, Museum, etc.) Library, on the “Ask the Lawyer Webinar Recordings” page.

[3]^ I add “social media” because there is a lot of overlap between the risks, and while younger people are now somewhat savvier about some of those risks in 2026, it is still good to educate people about them.

[4]^ And, perhaps, bathrooms, where it could be handy reading material.

Academia, AI, and Over the Garden Wall


Question

Faculty and students sometimes advise each other to upload articles downloaded from library-licensed databases into AI tools for summarization, or for study purposes, such as generating study questions and dialogs about the materials. These are not public domain articles that happened to be indexed in a library database.

Many of our faculty have access to ChatGPT EDU, which creates a "walled garden" around the files, preventing them from being used for AI training and treating them as institutional data. However, our students do not yet have access to the EDU account. In addition, many students and faculty are experimenting widely with other free AI tools on the Internet and are most likely uploading all types of files. I realize we cannot stop all of this, but if we have a statement to let library patrons know the proper uses, we are hopefully at least covering our obligations here.

Could you suggest a reasonable policy statement that libraries could publicize to their patrons regarding this issue to help ensure that patrons respect author and publisher rights and that libraries will not end up in legal trouble down the road?

Answer

Yes, I will do that.

But while I do that, let's also play a game.

Readers, please use your favorite AI and give it this prompt:

"Please suggest a reasonable policy statement that libraries could publicize to their patrons regarding this issue to help ensure that patrons respect author and publisher rights and that libraries will not end up in legal trouble down the road."

Let's see what your favorite AI says! Send your answers to nathan@losapllc.com and we'll post them in a coda to this Ask the Lawyer if we get at least three by April 1, 2026. Please let us know what tool you used and confirm we have your permission to use the output. 

Unassisted by AI[1], here is my version:

[Start of model statement]

WAIT!

Take a breath before you upload someone else's work into AI. 

Here is why: 

  • Uploading someone else's work to a site owned by a third party without permission is similar to making copies and distributing it (copyright infringement).
  • Depending on the AI you use, the summary or data you get may be unreliable.
  • Using the output could have an impact on ethics and academic integrity.

This posting is not to trash AI; it can be a very helpful tool. Here in the Library, our professional librarians are trained to help you find the right research tool for your work. See a librarian for input on what AI products are trustworthy for a particular purpose. 

We'll help you breathe easier. 

[End of model statement]

The legal bases for the bulleted items in the model statement are further discussed in Can Use of AI Impact Ownership and Citations in Academic Work? 

Now let's consider the other aspect of this question: the concept of the "walled garden."

As the member says, a "walled garden" is a "closed" environment. For licensed AI, it often means the user can "switch off" the AI's use of the user-supplied content to train the AI, or limit the training to a specific purpose (such as improving the user's experience).

Because this assurance is part of the legal terms of using a product, the phrase is also making its way into case law. Here in New York, it is part of the infamous "lawyer citing fake precedent and then citing fake precedent to defend himself from citing fake precedent" case:[2]

"In this letter, Mr. Feldman flagged for the Court the "significant challenge" he and many other practitioners face accessing unreported citations. (Dkt. #183 at 1-2; see also id. at 3 ("[I]t should not be assumed that everyone has access to the walled garden[s] of Westlaw or Lexis." [emphasis added]

The phrase is also used in terms of online advertising.[3]

Speaking as both a lawyer and a gardener, I find the easy assurance of a "walled garden" in a commercial product somewhat… iffy.[4] While I appreciate that the "Terms of Use" can provide contractual assurance that "what happens in YourAI stays in YourAI",[5] as any gardener knows, unwanted plants creep in (or out) no matter what. 

For example, even if your institution selects a paid subscription and enables the highest "do not use" settings, it just takes one person with admin privileges to toggle the switches, and soon the rhizomes are putting up new shoots outside the garden wall. On a more nefarious note, it just takes a few errors for the product to not work as promised.[6] This requires users to be vigilant.[7]
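What does that vigilance look like in practice? Here is a minimal sketch in Python using the fictional "YourAI" product from footnote 5. Everything here (the endpoint, the setting names, the response format) is invented for illustration; the idea is simply a scheduled audit that alerts you if the walled-garden settings drift from what was agreed:

```python
# Hypothetical audit of the fictional YourAI's "walled garden" settings.
import json
import urllib.request

EXPECTED_SETTINGS = {
    "allow_training_on_customer_content": False,  # the "garden wall"
    "share_content_with_partners": False,
}

def audit_walled_garden(api_url: str, api_key: str) -> list[str]:
    """Return the names of any settings that drifted from expected values."""
    req = urllib.request.Request(
        f"{api_url}/org/settings",
        headers={"Authorization": f"Bearer {api_key}"},
    )
    with urllib.request.urlopen(req) as resp:
        current = json.load(resp)
    return [
        name for name, expected in EXPECTED_SETTINGS.items()
        if current.get(name) != expected
    ]

# Run this on a schedule; any drift means someone toggled a switch.
drifted = audit_walled_garden("https://api.yourai.example", "SECRET-KEY")
if drifted:
    print("ALERT: walled-garden settings changed:", drifted)
```

A human still has to decide what to do when the alert fires, which is rather the point.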

This is one of the many reasons academic libraries are essential in today's higher-ed environment: academic librarians stand ready to assist students and faculty in assessing the right AI product to use (and when not to use one).

Academic librarians who train their teams to help students, faculty, and administrators assess the trustworthiness[8] and suitability of AI products will be ready to meet this challenge. Posting a short policy to inspire library users to connect and ask for help will hopefully get them access to that resource at the right time.

Thank you for a great question.

We'll see if we get that coda.


[1]^ But admittedly slightly assisted by caffeine.

[2]^ The citation for that case is Flycatcher Corp. v. Affable Ave. LLC, 2026 U.S. Dist. LEXIS 23980, 2026 LX 49318, 2026 WL 306683. I found this in the "walled garden" of LEXIS, which is one of the major expenses of running a law firm.

[3]^ See United States v. Google LLC, 778 F. Supp. 3d 797, 2025 U.S. Dist. LEXIS 74956, 2025 LX 206807.

[4]^ I was going to go with "suspicious", but that was too strong. It's just… iffy.

[5]^ "YourAI" is a fake product I invented for this answer. I don't want to pick on a real product or it will write me a bad review (check out the Wall Street Journal article from 2/13/2026 describing the experience of developer Scott Shambaugh after he rejected a few lines of his AI project's code).

[6]^ Just to be clear: I am not a luddite. I am "risk-focused."

[7]^ Not "up all night worrying" vigilant, but "checking regularly to confirm all is as it should be" vigilant.

[8]^ For more on assessing "trustworthiness," see the Ultimate AI Policy materials on the "Ask the Lawyer Webinar Recordings" page.

Re-leveling Books Using AI

Submission Date

Question

[This question comes from a regional BOCES.]

Our technology integration specialist suggested that we use an AI tool to re-level books/text by an original author to a more appropriate reading level for students who are struggling. This is now being used regularly with our special education staff for students who are struggling readers. Is this an infringement of copyright?

Answer

In the spirit of learning, I am going to answer this question with a multiple-choice quiz. For purposes of the quiz, we’ll use the member’s term “re-level” for generating simplified versions of curricular materials.

[NOTE: If you are not feeling playful and just need the answer, please read footnote #2 and skip to the “Final Paragraphs” section of this response.]

Name:                                                                                                             Date:              

Copyright Quiz

 

  1. A teacher uses software[1] to create a “re-levelled” version of “The Gettysburg Address,” which was published before 1900. Is it infringement?
     A. Yes, because creating a “re-levelled” version of a book is creating a “derivative work”[2] protected by the Copyright Act.
     B. No, because even if it is a derivative work, the book is no longer protected by copyright.
     C. Maybe, if the work was recently turned into a movie.
  2. A teacher uses software to create a “re-levelled” version of the 2020 young adult book All Boys Aren’t Blue, and the district does not have the permission of the copyright owner. Is it infringement?
     A. No, because the use is for education.
     B. No, because the software removes all the parts people are complaining to the school board about.
     C. Yes.
  3. A teacher uses software to create a re-levelled version of a New York Times article for a learning-disabled student, and the district does not have the permission of the copyright owner. The teacher only allows access to the student. Is it infringement?
     A. No, because the simpler version is a modification of a single article to accommodate a person with a disability.
     B. No, because the district is a state institution that is arguably exempt from copyright claims in federal court.
     C. Yes.
  4. A teacher uses software to “re-level” a short excerpt of a history textbook to illustrate the dangers of relying on AI to modify learning content, and the district does not have the permission of the copyright owner. The class is given a hard copy of the modified paragraph with the unmodified paragraph next to it for comparison, and the assignment is also posted on the class’s LMS.[3] Is it infringement?
     A. Yes, but kudos to the teacher for emphasizing critical thinking.
     B. No, so long as the excerpt is only long enough to demonstrate the point of the modification and is not used as a substitute for the original, allowing it to be considered a “fair use”.
     C. No, not even when the district decides they like the modified version better and decides to re-level the entire book.
  5. A teacher uses software to re-level an entire collection of curricular materials with permission of the publisher, who is not the copyright owner but has an unlimited exclusive license to authorize “derivative works” of the content. Is it infringement?
     A. No, but I am concerned this type of thing could dull our vigilance against the prospect of a future subject to the binary whim of robot overlords.
     B. Yes, because there is no specific permission from the actual author.
     C. No.

 

 

Answer Key:

  1. B
  2. C
  3. C
  4. B
  5. A or C, depending on your POV.

Final Paragraphs

As the above quiz scenarios illustrate, the answer to the member’s question is: it depends on a variety of factors, but even if the use is limited to a specific student with an IEP,[4] the only ways to ensure the creation/use of an AI-modified version of an entire work is not an infringing “derivative work” are to: 1) only modify works in the public domain; OR 2) only modify works for which a district has specific permission to create derivative works.

The sole exception to this would be a modification that met the criteria for “fair use”[5] (as modelled in question 4).

I will (mostly) leave the ethical/educational/social/futuristic terror aspects of this question to philosophers,[6] ethicists, educators, Writers Guild members, artists, and speculative fiction writers.

That said, if someone uses AI to “re-level” this answer for a 4-year-old, I hope the modified version will be: “Don’t use people’s work without permission, and please don’t give up on people.”

 

[1] I am going to use the term “software” since the function described could be done by “AI” or (I believe) could be done by a sophisticated “find-and-replace” computer program. In making this distinction, I rely on the definition of “Artificial Intelligence” in 15 USCS 9401, which defines AI as: “… a machine-based system that can, for a given set of human-defined objectives, make predictions, recommendations or decisions influencing real or virtual environments. Artificial intelligence systems use machine and human-based inputs to—

(A) perceive real and virtual environments;

(B) abstract such perceptions into models through analysis in an automated manner; and

(C) use model inference to formulate options for information or action.”

[2] A “derivative” work is a defined term in Section 101 of the Copyright Act. The definition is: “[A] work based upon one or more preexisting works, such as a translation, musical arrangement, dramatization, fictionalization, motion picture version, sound recording, art reproduction, abridgment, condensation, or any other form in which a work may be recast, transformed, or adapted.” An excellent discussion of how AI-generated output can (or might not) be a “derivative work” can be found in the case Andersen v. Stability AI Ltd., 23-cv-00201-WHO (N.D. Cal. Oct. 30, 2023).

[3] “Learning management system.”

[4] An IEP is an “Individualized Education Program” (as I am sure many people reading this know). While modified formats of copyright-protected works can be generated to meet the needs of a person with an IEP (for instance, generating a Braille edition of a printed book), creating a “derivative” work (basically, a simpler or “re-levelled” version of the original work) does not currently fall within this exception to infringement.

[5] “Fair use” is defined by Section 107 of the Copyright Act. For more on fair use, check out the “fair use” tags on Ask the Lawyer, and for educators, review your institution’s “fair use” policy.

[6] I will share a personal story, though. The other day (specifically, “the other day” in November 2023), my 4th-grader came home with a one-page read-aloud assignment called “The Man Who Lived in a Hollow Tree.” It was such an incoherent mishmash that I decided to research what the heck was going on. By dint of research, I found out that the one-page assignment was most likely an abridged version of “The Man Who Lived in a Hollow Tree” (reviewed at https://www.goodreads.com/en/book/show/3866740), except the modified version left out critical facts like the main character being a carpenter, his name, and why he chose to live in a tree. I found myself wondering “Who the heck wrote this?” And now, perhaps, I know.

Privacy and Zoom's AI

Submission Date

Question

Recently, Zoom introduced new AI features and updated their terms of service agreement, indicating that any user data can be used to train their AI products (TOS 10.4: https://explore.zoom.us/en/terms/). There was a backlash and Zoom quickly put out a clarification and stated that these features are opt-in only (https://blog.zoom.us/zooms-term-service-ai/). Despite this clarification, I am wondering if there are any privacy or FERPA concerns that librarians and educators need to be worried about since Zoom is still used heavily in both library and school worlds. Should we be looking for alternatives or is this just the way of the world now?

Answer

The day this story really broke (August 7, 2023, a day that will live in minor infamy), Nathan in my office pointed this issue out to me.

"Did you see that Zoom is going to use customer content to train AI?" he asked (this is what passes for casual morning conversation in my office).

My eyebrows went up, mostly because Zoom was being upfront about it, rather than because it was being done at all (because yes, this is the way of the world now).  That said, there are some tricks libraries and educators—and any business that cares about use of personal data—can employ to resist it.

Not surprisingly, this comes down to two simple things: awareness, and language.

We'll use the recent Zoom scenario to illustrate:

I am not sure how awareness of the new clause first broke (I am going to outsource that research to Nathan, and if he finds out, he'll put it in a footnote, here[1]).  But it is clear that, fairly soon, consumers were unambiguously aware of the privacy and use concerns posed by the "we'll suck you into our AI" Terms of Use.

Here is the language Zoom used[2] (and has since retracted) to announce it would use our conferences, etc. to train AI:

"[You agree Zoom can use your Content] ... for the purpose of product and service development, marketing, analytics, quality assurance, machine learning, artificial intelligence, training, testing, improvement of the Services, Software, or Zoom's other products, services, and software, or any combination thereof..."

This is where language comes in.

As the world soon knew, this "old" language listed "artificial intelligence" as well as "training" (although the Terms' dubious use of commas suggests to me that Zoom could use our Content not just for "training" AI, but humans, too... actually an even more terrifying prospect, from some perspectives).[3]  So yes, there is a lot to be concerned about when it comes to "Customer Content" (which is Zoom’s term for the recordings/data/analytics that come from "Customer Input", which is the raw content you put into Zoom[4]).

Now let's use our awareness of the current Terms of Use (current as of August 24, 2023, at least) and see what the language says:

"10.2 Permitted Uses and Customer License Grant. Zoom will only access, process or use Customer Content for the following reasons (the “Permitted Uses”): (i) consistent with this Agreement and as required to perform our obligations and provide the Services; (ii) in accordance with our Privacy Statement; (iii) as authorized or instructed by you; (iv) as required by Law; or (v) for legal, safety or security purposes, including enforcing our Acceptable Use Guidelines. You grant Zoom a perpetual, worldwide, non-exclusive, royalty-free, sublicensable, and transferable license and all other rights required or necessary for the Permitted Uses."

Although not as stark as the old language, there is still a lot of wiggle room in there for blending Customer Content with AI.  What if Zoom is "obligated" to provide a service, and decides to use AI to do it?  What if Zoom decides AI is needed for "enforcing Acceptable Use Guidelines"?  What if Zoom decides that AI is needed for your safety, and that, also for your safety, Customer Content must be used to train that AI?

Of course, right now, the Terms also say (in bold, so you know they mean it[5]):

"Zoom does not use any of your audio, video, chat, screen sharing, attachments or other communications-like Customer Content (such as poll results, whiteboard and reactions) to train Zoom or third-party artificial intelligence models".

So can this assurance be trusted?  This brings us back to language.

Back in the day, of course, computer systems were not "trained" (as one would train a dog, or a small child to use the toilet) but rather, "programmed."

However, even in the (relatively) slow-moving world of the law, this is no longer the case.

Here is an excerpt from a recent case[6] where lawyers were squabbling over how to gather "Electronically Stored Information" ("ESI"):

Defendants propose the following method for searching and producing relevant ESI:

1) Narrow the existing universe of approximately 27,000 documents...

2) Undersigned counsel reviews a statistically significant sample of the remaining e-mails at issue and marks them relevant/irrelevant to create a "training set;"

3) That training set is then used to "train" the eDiscovery vendor's artificial intelligence/predictive coding tool, which "reviews" the remaining e-mails and assigns each a percentage-based score that measures likelihood to be responsive...

So even in the law, computer systems are being "trained", and there is a precise meaning to the term (which in plain[7] terms is "repeatedly using data and parameters to create patterns desired by the user").
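To make the workflow in that excerpt concrete, here is a minimal sketch of the same idea: label a sample, "train" a model on those labels, then score the remaining documents by likelihood of responsiveness. It assumes Python with the scikit-learn library and uses invented toy documents; it is not any eDiscovery vendor's actual predictive-coding tool.

```python
# A minimal sketch of the "training set" workflow from the case excerpt:
# counsel labels a sample of documents, a model is trained on those labels,
# and the model scores the remaining documents by likelihood of relevance.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

# Step 2 in the excerpt: a reviewed sample marked relevant (1) / irrelevant (0).
training_docs = [
    "Re: shipment delay and warehouse injury report",
    "Lunch order for Friday",
    "Forklift maintenance logs attached",
    "Fantasy football league standings",
]
training_labels = [1, 0, 1, 0]

# Step 3: "train" the tool on the labeled sample...
vectorizer = TfidfVectorizer()
model = LogisticRegression()
model.fit(vectorizer.fit_transform(training_docs), training_labels)

# ...then score the remaining documents with a percentage-based
# likelihood of being responsive.
remaining_docs = [
    "Incident report: pallet jack collision in aisle 4",
    "Holiday party RSVP reminder",
]
scores = model.predict_proba(vectorizer.transform(remaining_docs))[:, 1]
for doc, score in zip(remaining_docs, scores):
    print(f"{score:.0%} likely responsive: {doc}")
```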

So, with all that said, let's look at the member's questions:

Question 1: I am wondering if there are any privacy or FERPA concerns that librarians and educators need to be worried about since Zoom is still used heavily in both library and school worlds.

The short answer is: yes.

Question 2: Should we be looking for alternatives or is this just the way of the world now?

The short answer is: yes.

Here is the reason for my first short answer:  Many contracts have what I call a "we were just kidding" clause that allows the contractor to change their terms at will, and without notice.  Here is the one in the current version of Zoom's Terms:

15.2 Other Changes. You agree that Zoom may modify, delete, and make additions to its guides, statements, policies, and notices, with or without notice to you, and for similar guides, statements, policies, and notices applicable to your use of the Services by posting an updated version on the applicable webpage. In most instances, you may subscribe to these webpages using an authorized email in order to receive certain updates to policies and notices.

What does this mean?  Even though its AI assurances are in bold, Zoom can change them at any time.

The reason for my second short answer is this: Libraries and educational institutions have incredible commercial leverage when they work together.  For this reason, libraries and educational institutions should always be using their awareness of data, ethics, use, and privacy issues to demand contract language that meets their expectations.

Those expectations will change from product to product. With a product like Zoom, which can generate audio/video/text/analytics/+, including content that may later become part of a student file (FERPA) or a library record (protected by various state laws), the assurances should be:

  • All content entered is the property of the customer (library or school);
  • At all times, all content entered into the service, or content generated with the use of customer-supplied content, may only be used to provide the current service(s) specifically authorized by the customer;
  • Any other use of data (for product improvement, for marketing) must be via a specific opt-in;
  • Terms cannot change without notice, and terms in effect at the time content was generated will govern such content, regardless of future changes;
  • Customers can receive assurance that all data is purged upon request; and
  • Customers can verify that they can enforce and comply with all their own internal policies and obligations regarding data creation, use, and storage.

In addition, libraries and educational institutions should have a clear set of policies for how they, as the potential owners of recordings and other data associated with such services, will use their ownership and control of the content.  It would be unfortunate, to say the least, for a student to find that their college disciplinary hearing for underage drinking is now available on YouTube.[8]

Many public library groups and academic consortia are already working to develop such criteria[9] (which should focus more on isolating aspirations and expectations than on legal wording, since legal wording will vary from state to state). And some institutions are designing their own services[10] in order to avoid contract terms that don't meet their criteria.

At the individual institutional level, this means building assessment of such services, and bargaining time, into the procurement process.  It also means thinking through that institution's own particular ethics and responsibilities and developing internal policies to promote them.

So, while this is the world we live in, libraries and educational institutions are well-situated to make a better one. 

Thanks for an important question.

 

 

[1] It may have been first pointed out by an anonymous user of the Reddit-like website Hacker News (https://news.ycombinator.com/item?id=37021160). This story (https://stackdiary.com/zoom-terms-now-allow-training-ai-on-user-content-with-no-opt-out/), published the same day, was shared on Twitter the next day.

[2] We didn't Wayback this.  On the day Nathan informed me of this, I asked him to pull the Terms off the site, so I could review.  We got the question to "Ask the Lawyer" about a week later.  Sometimes things just work out.

[3] What perspectives?  Ethical, moral, psychological, legal, to name a few.

[4] Definition is from paragraph "10" of the Zoom Terms of Use in effect on 8/7/2023.

[5] Like all things in law, the rules on use and interpretation of bold, underline, and italics vary from state to state.  I am not kidding.  For a great book on typography and legal writing, check out Matthew Butterick's "Typography for Lawyers."

[6] Maurer v. Sysco Albany, LLC, 2021 U.S. Dist. LEXIS 100351

[7] I trust it is painfully obvious I am not a programmer.

[8] An extreme example...then again, think of the use people have tried to make of old letters, files, and yearbooks.  Also, do we think YouTube will make it to 2033?