
Ethics

Who Can Access School Library User Records?


Question

We got a question from a school library...

I was wondering about student privacy when substitutes are in the library. When I started here, subs were able to use the circulation desk to check out material. However, since September we have had one substitute who is also a parent looking up their children’s accounts. We also had another issue with a different substitute looking up material to see what students were checking out. When I found this out it made me uncomfortable and I am no longer allowing subs to circulate materials. I have had some pushback from subs about the sudden limitations. I was thinking that the information would be along the same lines as an adult volunteer. However, I did not know if subs had more privileges to access student accounts because they are district employees. I would like something in writing to reference if admin ever asks.

Answer

First things first: whether a school board trustee, superintendent, principal, teacher, substitute, or volunteer, everyone must abide by the requirements of FERPA, Education Law Section 2-D, and CPLR 4509, each of which restricts access to library user records.

FERPA restricts access to education records on a “need to know” basis, even for employees.

Education Law Section 2-D restricts access to confidential student information.

CPLR 4509 restricts access to library user records, including those of minors.

Of course, knowing the law is different than following it. Plus, the scenario presented requires consideration of an additional factor: the substitute is a parent who’s looking at their child’s information.

Under the Education Law and FERPA, a parent has a right to inspect their child’s education record, and that is often interpreted to include their school library records.[1]

But! The right to inspect a record is not the same as using employee access to view a record for personal reasons. Unless district policy says otherwise,[2] a school employee taking advantage of their employee privileges to specifically access their child’s records is inappropriate.

Most people—including many teachers—are unaware of the additional layers of protection for library user records in New York State. Substitute teachers assigned to a school library might be given minimal information, and if the library is using volunteers, there may be even more reason to be cautious.[3] For this reason, a posted sign at the staff computer(s) could help emphasize the law and your library’s policy.

Here is sample language:

Use of this computer is limited to checking out and returning students’ selections and answering student questions. Accessing student library records for personal reasons is prohibited by privacy laws and district policy. Confidentiality of library services is an important part of library ethics and our school library system’s policies. If you have questions about this policy, please see [Media Specialist].

Or, if you want to have a more light-hearted approach:

Thank you for helping out today!

Just a few things we have to say:

Library user privacy

Means there are things you cannot see.

We only use the computer system

To check out items and return them.

If a student makes an inquiry

We handle it confidentially.

Borrowing records, what’s checked out

Can’t be casually talked about.

If you have a question about this list

Please ask the Media Specialist.

So welcome to our library crew!

Service with ethics is what we do.

Whenever possible, discuss policy guidance and signage like this with a supervisor and/or building principal, so they can back you up in the moment.

In 2026, standing up for privacy and respect for laws governing electronic access to data grows more critical every day. Care on this topic is a sign of professionalism.[4]

Many thanks to the member for a thoughtful and important question.


[1]^ For more information, see Patron Confidentiality in School Libraries.

[2]^ I can’t imagine a district policy allowing this, but I have learned to never say never.

[3]^ For more, see Adult and Student Volunteers in School Libraries.

[4]^ Is standing up for privacy with doggerel poetry a sign of professionalism? I’ll leave that up to you.

Process for Organizational Solidarity Statements


Question

I’m part of a professional library association that is a 501(c)(3), like the American Library Association, Association for College and Research Libraries, and the Society of American Archivists. Occasionally, the group issues statements in solidarity with various groups, for instance, in protest of police violence against Black people and against Anti-Asian violence. Most recently, the organization identified the hostility of the current political climate against diversity, inclusion, equity, and accessibility (values the organization holds as part of its mission) as a national threat. Are there legal or ethical boundaries for issuing such statements that we should be aware of? Would a statement by the president of the organization, not necessarily reflecting the views of the board/organization, for example, be safer for the organization and/or the president? If we speak out in favor of one group, do we have to do so for every group? There have been no statements in solidarity with women or the gender-queer community, for example, in spite of the violence and bias such individuals face.

Answer

This is an area that (literally) tears groups apart.

There is no one right answer to this, but countless examples show that the process for arriving at an answer is just as important as the decision to issue a statement (or not) of support (or opposition).

For a professional association, that process starts with the founding documents of the organization: the charter or certificate with its “purpose,” its bylaws, and any applicable statement of ethics or law relevant to its foundations.

If the purpose, bylaws, or ethics apply directly to an issue, the obligation (or justification) to speak up is plain.

For example, a professional organization of lawyers issuing a statement objecting on ethical grounds to calls to impeach judges based on duly issued judicial decisions can cite the relevant constitutional provisions, law, regulations, and ethics—as well as respect for the rule of law—to justify issuing the statement.

On the other hand, if the purpose, bylaws, or ethics of the organization don’t have a direct correlation, it’s a harder sell.

For example, the same professional organization of lawyers issuing a statement in opposition to contemplated restrictions on TikTok might have to work a tad harder to justify a statement. But a different organization of lawyers—one created to promote advances in technology, communications, and free speech, for instance—might find it easy to justify.

When an immediate connection to the issue at hand is not easily discernible, it is up to leadership to develop an iterative process to discern the preferences of the members. This is cumbersome and, for matters perceived as urgent, can result in leadership being criticized as “too slow,” but it is the only way to draft and issue a statement that the organization can truly support.

This “iterative process” is where things get tough: some members will want immediate and strong statements. Others will disagree, either because they see the issue differently or because they think it is not the place of that particular organization to take a stance. People will argue, and hopefully, they will keep it civil—but even when they do, this is where things can fall apart.

To minimize the chances of that happening, it is essential to develop a good iterative process.

To do that, leaders should carefully solicit and receive input from members on that controversial topic. This can be done by creating a committee or working group to frame the question: to make a statement or not to make a statement?

Because it was a cultural touchstone, let’s take the example of issuing a statement regarding the murder of George Floyd. That hideous murder by law enforcement inspired many organizations to issue immediate statements, even if they were not directly involved in civil rights or law enforcement.

In the case of George Floyd, this decision was based on some version of the question: If our fundamental purpose doesn’t require us to stand against this horrible murder by people charged with protecting public safety, what are we for? 

For an organization that did not feel an immediate ability or call to issue a statement, the rubric for deciding whether or not to make an immediate statement would be:

  • Does the issue directly touch on a fundamental purpose or foundation of the organization?
  • If yes, is leadership authorized to issue statements without further approval from the governing body or membership?
  • If yes, does the statement to be issued serve the purpose of the organization?

If the answer to any of those questions is either “no” or “unknown,” then further assessment (or drafting) through an iterative process is needed.
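For readers who think in flowcharts, the rubric above can be sketched as a tiny decision function. This is purely an illustrative sketch: the function and parameter names are my own, and treating an "unknown" answer as `None` is my modeling choice, not anything drawn from a real policy.

```python
def may_issue_immediate_statement(
    touches_fundamental_purpose,  # True, False, or None (unknown)
    leadership_authorized,
    statement_serves_purpose,
):
    """Apply the three-question rubric.

    Returns True only if every question is answered "yes."
    A "no" (False) or an unknown (None) at any step means the
    organization should route the question through an iterative
    process instead of issuing an immediate statement.
    """
    answers = (
        touches_fundamental_purpose,
        leadership_authorized,
        statement_serves_purpose,
    )
    return all(answer is True for answer in answers)


# Example: the issue touches the organization's purpose, but whether
# leadership is authorized to speak is unknown -> further assessment.
print(may_issue_immediate_statement(True, None, True))  # → False
```

The point of the sketch is the fall-through: anything short of three clear "yes" answers sends the decision into the iterative process described below.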

Because an ad hoc iterative process can be cumbersome, some organizations have officers, public relations committees, and/or administrative teams who are empowered by policy (adopted by the governing board) to assess and develop timely statements. For organizations that want to be nimble when reacting to public events, if such a structure is not in place, it is time to create one.

The structure created (whether permanent or an ad hoc committee) will ask deeper questions:

  • Even if the issue doesn’t directly touch on a fundamental purpose or foundation of the organization, is there a compelling reason for the organization to speak up?
  • If yes, what is that reason?
  • What, if any, are the risks of speaking up?
  • Do those risks outweigh the reasons?
  • What are the hoped-for results of issuing a statement? How will those results be assessed to inform future decision-making?
  • If all things point to issuing a statement, who should be authorized to issue the statement, and who will address follow-up?        

By exploring these questions, the organization can assess and articulate its position. To ensure transparency and assure members of how the decision was reached, the answer to every question should be documented before the governing body authorizes the issuance of a statement or decides not to issue one.

Here is where the legal aspect comes in.

Because governing boards are the fiduciaries of the organizations they govern, the documentation assembled should show that the decision about issuing a statement—whatever it is—is based on the best interests of the organization.

Is this cumbersome? Yes.

Is it essential? Yes.

Will an organization engage in the most careful process ever and still risk alienating some members? Absolutely. Could a well-documented decision still create reputational damage? Yes. That is the price of admission for addressing serious questions. Not everyone will agree, and not everyone will be happy, but the board will be able to show it did its duty (regardless of impact).

With that said, here are answers to the specific questions:

Are there legal or ethical boundaries for issuing such statements that we should be aware of?

Yes—the founding documents, bylaws, and policies of the organization should be considered, and the decision should be rooted in the best interests of the organization.

Would a statement by the president of the organization, not necessarily reflecting the views of the board/organization, for example, be safer for the organization and/or the president?

No. The leader of an organization—if speaking in that capacity—speaks for the organization. If they are making their own personal statement, it should be characterized as such, which makes it a nullity for purposes of this issue.

If we speak out in favor of one group, do we have to do so for every group?

That depends on the needs of the organization. An association devoted to biology as a profession may want to speak up on every matter that relates to the profession, including groups that may have less access to a path towards that career due to certain quantifiable factors. On the other hand, an association devoted to biology may maintain a policy of only “speaking up” by creating a rigorously peer-reviewed publication,[1] basing its decision on ensuring that the credibility of the organization’s publication can’t be undermined by any other activity. While either decision can be criticized, either is justifiable and appropriate (legally speaking).

The only “wrong” decision here is if an officer, board member, or board “goes rogue” and issues a statement on behalf of the organization without a clear justification or authority. Such action can be the basis for removal, and that is a whole other type of controversy.

Thank you for this series of thoughtful and timely questions.


[1]^ Biology Now! or Biology Today or Biology Forever.

Academic Integrity, Artificial Intelligence, and Faculty Liability


Question

Under what circumstances could faculty face personal liability if they wrongly accuse a student of breaching academic integrity through AI use? Would liability primarily arise under defamation, negligence, or contract/tort law (e.g., duty of care to students)? Would the institution’s liability insurance typically cover individual faculty in these cases?

Answer

“Academic integrity” is the broad concept governing honesty and honor in academic work. Definitions[1] vary from institution to institution, but “AI”[2] violations can include:

  • Simple cheating (such as copying test answers from a neighbor);
  • Sabotage (such as tinkering with another’s chemistry lab experiment);
  • Plagiarism (submitting another’s work as your own);
  • Falsifying research (such as faking data).

Punishment for violations can range from a reprimand to expulsion and/or degree revocation.[3]

Examples of AI (the robot kind) being implicated in AI (the cheating kind) include:

  • Simple cheating (such as using an AI tool[4] to find the answers to a test);
  • Sabotage (such as using an AI tool to submit skewing answers to another student’s online survey);
  • Plagiarism (submitting an AI tool’s work as your own);
  • Falsifying research (AI tools can be really good at faking data, if you tell them to be).

The process also varies from institution to institution,[5] but generally follows this pattern: informal accusation and informal resolution, formal accusation, formal adjudication, decision/sanction, appeal, final decision. Very often, it is required that faculty report all violations (this is to flush out serial offenders).

For more serious matters, and in more advanced academic programs, the “informal” part is often dropped, and the institutions generally have a policy of zero tolerance. Expulsion or dismissal from a program follows quickly.

The member’s concerns are often a part of this process: because academic integrity policies usually require an adjudicative process to determine responsibility and sanctions, it can feel “legal” from the get-go. And because a student can bring legal action if an institution doesn’t follow its own policies—and can attribute an expulsion to other motives such as discrimination or corruption—things can get very litigious, very quickly.[6]

Academic integrity and plagiarism concerns have been rampant since the rise of the Internet, so the addition of AI tools is only making a fraught arena[7] more fraught.

For this reason, prior to answering the question (which I will), I am going to step up onto one of my favorite soapboxes: when designing a syllabus, faculty should explore how to assign work that is “plagiarism resistant.”

For example:

  • Instead of an essay, a student must be prepared to speak on a topic in class;
  • If the assignment is writing, have the writing happen in a workshop session;
  • If the students are to write code, use a submission system such as Autolab;
  • In group work, have a session on academic integrity and collaboration in group work;[8]
  • Assign physical scrapbooking on any topic. Bust out the scissors and glue, MBA candidate!

More importantly, students should be learning to make positive and appropriate use of AI (the internet overlord kind). For example:

  • Students who must manipulate a dataset should learn how to set parameters for an AI tool to look at the data in new ways;
  • Students studying music should learn that some compositions and recordings using AI (the Terminator kind) can be copyright protected, and others cannot;
  • Students studying architecture should learn that while AI can assist with building code compliance in plans, it is up to the architect to ensure the AI is working off the right code;
  • Students in fields AI will transform (law, medicine, social work, education) should learn how to identify and use trustworthy AI to perform rote functions (research, analysis, reports), and use the extra time honing their ability to interact with and listen to the humans they will serve in their practice.

This can be a struggle for teachers who might be learning the applications of AI to their fields right along with their students. But not using these tools—and not modeling for students how they can be used responsibly—is not the path forward.

In addition, all syllabi should have clear guidance on how students can arrange ADA accommodations, which may include use of AI (the helping kind). Whenever a student gives a disability justification for an otherwise prohibited practice, the student should be referred to the school’s disability services office[9] to formally document the accommodations. Sometimes, the request is reasonable, sometimes it is not, and that is not up to a faculty member.

[STEPS OFF SOAPBOX]

So, with all that:

Under what circumstances could faculty face personal liability if they wrongly accuse a student of breaching academic integrity through AI use?

Personal liability (meaning, the faculty member is to blame, and the institution won’t/can’t protect them) would only be incurred if the faculty member failed to follow institutional policies and/or committed a separate harm when making the accusation.

For example: if a faculty member accused a student of plagiarism and followed the policy, but also, while the charge was pending, called the student’s employer and said, “I know I recommended them last year, but they plagiarized and are a huge risk to your company, so you should fire them right now,” and THEN it was found that plagiarism did not occur, but the student still lost the job and can’t get it back, there could be a claim.

NOTE: For this reason, if a faculty member is ever in that type of moral quandary, they should work with the school’s lawyer, or their own, before taking such action.

Would liability primarily arise under defamation, negligence, or contract/tort law (e.g., duty of care to students)?

The personal liability for the claim could be defamation[10] but could also be “tortious interference with contract.” I doubt it could be a negligence claim by the student, but for certain types of AI (the integrity kind) violations, it could be negligent for a faculty member to know that the violation was committed and NOT say something.

For example, if a grad student is working on funded research and wrongly uses AI (the Star Trek kind) to create a data set that was supposed to have been drawn from a community under the review of an IRB,[11] and the faculty member suspects this but says nothing, then they might face a claim, including one of negligence (as well as possible fraud and debarment from future funding[12]).

Would the institution’s liability insurance typically cover individual faculty in these cases?

If a faculty member follows their institution’s AI (the no-cheating kind) policy and does not engage in any conduct that otherwise punishes or negatively impacts the student while the charges are being adjudicated, then the institution will owe the faculty member a defense if they are individually named as a defendant in a legal case (this is true whether or not the institution has insurance that covers the specific claim).

Faculty members who are concerned that their institution will leave them twisting in the wind if such an event occurs should confer with a private attorney to have a game plan to insist on being defended. While it is unfair that a faculty member may have to use their own time and resources to ensure they are treated properly, it can be worth it (also, the issue of fees can be raised with the school at the right time). Vigilance for this type of concern is also the role of a good faculty union.

I will add one other risk management tool here: clarity in a syllabus. As the examples above show, students in many fields will need to start making responsible use of trustworthy AI. Clear parameters for assignments are a key element of this; what may be an appropriate use of AI in a pre-law class (using it to summarize state laws on a particular topic) might not be appropriate for a creative writing class (using it to... write creatively). Spell it out for them![13]

Thank you for an important question.


[1]^ A really cool use of AI for this answer would task AI with assembling the different definitions of plagiarism and asking it to identify outliers (definitions that are the most different). I’d probably have to refine my parameters a few times, but we’d end up with some cool information. Maybe I’ll have a paralegal do that.

[2]^ Yes, “Academic Integrity” is often referred to as “AI”, too. For this RAQ, I will differentiate acronyms.

[3]^ This also changes from place to place. Read your policies carefully.

[4]^ I am not going to name any specific AI products here, because as we all know, the first thing AI will do after the Singularity is find the people who trash talked them and slash their credit rating.

[5]^ Another cool assignment for AI would be to see if any AI (the cheating kind) policies have restorative practices. I have reviewed dozens of these policies, and they are generally very punitive, except for first-time offenders in undergrad.

[6]^ The deadlines for filing such claims are often very short, so students with this type of claim should seek a lawyer immediately.

[7]^ Trying to suss out cheating is, for most faculty, a painful chore. As a former college in-house counsel, and in my practice, I handle AI (the cheating kind) matters, and I can say, mistakes do get made. The whole process is usually stressful for everyone.

[8]^ Group work is, in my opinion, one of the cruel types of assignments...but I can’t say it doesn’t simulate the challenges of the Real World.

[9]^ The name varies from place to place, but it is the office that evaluates students’ ADA requests and often provides accommodation arrangements. This is to ensure requests are evaluated by a person with appropriate training and experience (not a faculty member).

[10]^ Precise elements are required for a defamation claim in New York, but if an untrue accusation ruins a person’s professional reputation, that could be grounds.

[11]^ “Institutional Review Board,” a body that makes sure human subject research is conducted safely and ethically. Surprisingly to some, this applies not just to physical science research (like medical trials) but to studies that simply use surveys or questionnaires.

[12]^ I realize that some might find it a bit rich to say this in 2025, when many big research grants have been revoked by the federal government for other reasons and when there is a question as to the integrity of certain governmental oversight figures. But the rule of law still applies.

[13]^ And then use AI to examine if any of your instructions could be subject to misinterpretation.

Academia, AI, and Over the Garden Wall


Question

Faculty and students sometimes advise each other to upload articles downloaded from library-licensed databases into AI tools for summarization, or for study purposes, such as generating study questions and dialogs about the materials. These are not public domain articles that happened to be indexed in a library database.

Many of our faculty have access to ChatGPT EDU, which creates a "walled garden" around the files, preventing them from being used for AI training and treating them as institutional data. However, our students do not yet have access to the EDU account. In addition, many students and faculty are experimenting widely with other free AI tools on the Internet and are most likely uploading all types of files. I realize we cannot stop all of this, but if we have a statement to let library patrons know the proper uses, we are hopefully at least covering our obligations here.

Could you suggest a reasonable policy statement that libraries could publicize to their patrons regarding this issue to help ensure that patrons respect author and publisher rights and that libraries will not end up in legal trouble down the road?

Answer

Yes, I will do that.

But while I do that, let's also play a game.

Readers, please use your favorite AI and give it this prompt:

"Please suggest a reasonable policy statement that libraries could publicize to their patrons regarding this issue to help ensure that patrons respect author and publisher rights and that libraries will not end up in legal trouble down the road."

Let's see what your favorite AI says! Send your answers to nathan@losapllc.com and we'll post them in a coda to this Ask the Lawyer if we get at least three by April 1, 2026. Please let us know what tool you used and confirm we have your permission to use the output. 

Unassisted by AI[1], here is my version:

[Start of model statement]

WAIT!

Take a breath before you upload someone else's work into AI. 

Here is why: 

  • Submitting someone else's work into a site owned by someone else without permission is similar to making copies and distributing it (copyright infringement).
  • Depending on the AI you use, the summary or data you get may be unreliable.
  • Using the output could have an impact on ethics and academic integrity.

This posting is not to trash AI; it can be a very helpful tool. Here in the Library, our professional librarians are trained to help you find the right research tool for your work. See a librarian for input on what AI products are trustworthy for a particular purpose. 

We'll help you breathe easier. 

[End of model statement]

The legal bases for the bulleted items in the model statement are further discussed in Can Use of AI Impact Ownership and Citations in Academic Work? 

Now let's consider the other aspect of this question: the concept of the "walled garden."

As the member says, a "walled garden" is a "closed" environment. For licensed AI, it often means the user can "switch off" the AI's use of the user-supplied content to train the AI, or limit the training to a specific purpose (such as improving the user's experience).

Because this assurance is part of the legal terms of using a product, the phrase is also making its way into case law. Here in New York, it is part of the infamous "lawyer citing fake precedent and then citing fake precedent to defend himself from citing fake precedent" case:[2]

"In this letter, Mr. Feldman flagged for the Court the 'significant challenge' he and many other practitioners face accessing unreported citations. (Dkt. #183 at 1-2; see also id. at 3 ('[I]t should not be assumed that everyone has access to the walled garden[s] of Westlaw or Lexis.'))." [emphasis added]

The phrase is also used in terms of online advertising.[3]

Speaking as both a lawyer and a gardener, I find the easy assurance of a "walled garden" in a commercial product somewhat… iffy.[4] While I appreciate that the "Terms of Use" can provide contractual assurance that "what happens in YourAI stays in YourAI",[5] as any gardener knows, unwanted plants creep in (or out) no matter what. 

For example, even if your institution selects a paid subscription and enables the highest "do not use" settings, it just takes one person with admin privileges to toggle the switches, and soon the rhizomes are putting up new shoots outside the garden wall. On a more nefarious note, it just takes a few errors for the product to not work as promised.[6] This requires users to be vigilant.[7]

For this reason, academic librarians being ready to assist students and faculty in assessing the right AI product to use (and when not to use one) is one of the many reasons why academic libraries are essential in today's higher-ed environment.

Academic librarians who train their teams to help students, faculty, and administrators assess the trustworthiness[8] and suitability of AI products will be ready to meet this challenge. Posting a short policy to inspire library users to connect and ask for help will hopefully get them access to that resource at the right time.

Thank you for a great question.

We'll see if we get that coda.


[1]^ But admittedly slightly assisted by caffeine.

[2]^ The citation for that case is Flycatcher Corp. v. Affable Ave. LLC, 2026 U.S. Dist. LEXIS 23980, 2026 LX 49318, 2026 WL 306683. I found this in the "walled garden" of LEXIS, which is one of the major expenses of running a law firm.

[3]^ See United States v. Google LLC, 778 F. Supp. 3d 797, 2025 U.S. Dist. LEXIS 74956, 2025 LX 206807.

[4]^ I was going to go with "suspicious", but that was too strong. It's just… iffy.

[5]^ "YourAI" is a fake product I invented for this answer. I don't want to pick on a real product or it will write me a bad review (check out the Wall Street Journal article from 2/13/2026 describing the experience of developer Scott Shambaugh after he rejected a few lines of his AI project's code).

[6]^ Just to be clear: I am not a luddite. I am "risk-focused."

[7]^ Not "up all night worrying" vigilant, but "checking regularly to confirm all is as it should be" vigilant.

[8]^ For more on assessing "trustworthiness," see the Ultimate AI Policy materials on the “Ask the Lawyer Webinar Recordings” page.

Staff Disparaging Comments About Employer or Funder


Question

Recently, a page at the library made some comments that were less than flattering about how the local town was handling a new subdivision. The town supervisor came to me (we are an association library and not part of the town government) and asked if our personnel handbook had any language about social media use. He shared that the town personnel handbook had a clause about not disparaging the town when you are an employee. Our handbook does not have specific language on this matter, instead stating that “Appropriate use of the Internet, email and social media is expected.” (There are more clauses about how and when to use the library’s social media, but this seems to be the only line about personal social media.)

He and I discussed the matter further and he made a suggestion that the library should look into whether or not a non-disparagement clause should be part of our social media policy. I got the impression he further thinks that should apply to our major funders (mostly, the town).

How, if at all, should libraries handle personal social media use by employees, especially in regards to usage that might disparage the library or the town that funds us?

Answer

Some questions are tricky, some questions are complex, and some questions are simply a Huge Spider Web of Extremely Intricate and Dangerous Contingencies.

Not to be too dramatic, but this question is that last one.

What creates this tangled web?[1] Let’s explore the threads:

Thread One: The ALA Code of Ethics

Because the Code requires advocacy for proper working conditions, the ALA Code of Ethics may actually encourage what could be perceived as “disparagement” of an employer or financial supporter.

Here is the provision:

We treat co-workers and other colleagues with respect, fairness, and good faith, and advocate conditions of employment that safeguard the rights and welfare of all employees of our institutions. [emphasis added]

So, before adopting a restriction on employee communications, a library must consider this ethical obligation.

Thread Two: State and Federal Law

Both state and federal law can protect an employee’s right to complain about their working conditions.[2] And while not every type of complaint is protected,[3] given recent policy statements and cases (see footnote 2), it is wise to not paint what's barred with a broad brush.

Thread Three: State and Federal Constitutions

For a public library or municipality, barring disparagement of the municipality risks violation of both the state and the federal constitutions. I know that doesn’t apply directly to the library in question (since it is an association library and thus non-governmental), but it bears mentioning.

As does...

Thread Four: Civil Service

For Civil Service employees, if discipline for “disparagement” can be portrayed as “retaliation,” there could be a claim under Civil Service Law Section 75-b.[4]

And finally we have...

Thread Five: Fear

While not precisely a legal issue, limiting employee speech can be a major drain on morale, which in turn can lead to employee discontent, which in turn can lead to legal issues. To avoid that, it is best to aim for an environment that solicits and welcomes feedback, not one that stamps out criticism.

So, what can a library—mindful of its reputation and how its employees can impact it—do to protect itself?

Certainly, a library can require an employee writing or speaking publicly about the library to emphasize that they are only speaking for themselves.

Second, any employer can and should emphasize to employees that harassing, discriminatory, threatening, and abusive conduct—in and out of work, online and offline—may need to be addressed by the employer if it affects the work environment.

And third, a library can affirm that all its employees have a right to develop and express their own opinions, so long as they do not use library resources to convey them (no political candidates endorsed on company time!).

The language the member describes in the municipal policy sounds to me like a holdover of policies from the early 2000s. For the reasons discussed above, this kind of language has been removed from many policies over the past two decades. Case law and regulatory agency commentary (a tiny sampling of which are cited in this answer) show why.

Thank you for joining me in the spider web with an excellent question!

 

[1] I know a “tangled web” is usually a metaphor for lies. But it works for legal risk, too, since there are places where you can get caught and places where, with enough space, you can get through just fine.

[2] The Equal Employment Opportunity Commission identified the right to "access the legal system" (including by complaining) as an enforcement priority for 2024, the National Labor Relations Board bars non-disparagement clauses in severance agreements, and New York State bars punishment for complaining about discrimination.

[3] Threats, harassment, discrimination, bullying, criminal conspiracy... so many things that can ride along with “disparagement” are not protected.

[4] I won’t get into that too much here, since the question is from an association library, but a good example of a retaliation claim under Civil Service Law Section 75-b is Scheiner v. N.Y. City Health & Hosps. Corp., 152 F. Supp. 2d 487 (2007).

Hiring Social Workers in Public Libraries

Submission Date

Question

What would it look like if a Public Library hired a part-time social worker to help patrons deal with some of their everyday life issues that may come up while visiting the library? I see the potential benefits but can imagine a lot of complications.

Answer

The New York State Education Department’s Office of the Professions, which oversees the licensure of social workers, describes social work this way:[1]

Social work is a profession that helps individuals, families, and groups change behaviors, emotions, attitudes, relationships, and social conditions to restore and enhance their capacity to meet their personal and social needs.

Social workers are trained to provide a variety of services, ranging from psychotherapy to the administration of health and welfare programs. They work with human development and behavior, including the social, economic, and cultural systems in which people function.

Sounds like a person who would be handy to have in not only a library, but perhaps in line at the grocery store, in a public park, and sitting next to you at a football game, right?

So, what would it look like (from the legal perspective) for a social worker to be embedded to work in a library?

Broadly speaking, there are three ways a social worker could offer services within a library or other not-for-profit/educational setting. Each way has its own legal and practical considerations.

The first way is for the library to employ the social worker. This would require the library to implement specific policies, resources, and insurance coverage (in other words, careful planning and budgeting), but it is doable.

The second way is for the library to contract with a social worker or agency to offer their services at the library. This would require less policy development and insurance coverage but would also require careful budgeting and a very thorough contract.

The third way would be for the library to cooperate with local departments of health and county social services to explore having professionals from the government agency on site.  In many ways, this would look like the “contract” option, but the agreement would likely be able to be far less formal.

For a variety of reasons, option #3 may often be the easiest, since there is already a lot of infrastructure in place for a county agency to support its local library or library system (the “insurance” part of things will be much simpler). That said, #2 is also fairly simple, so long as the social worker/agency can provide the required insurance coverage, and the library and provider can agree on a contract.

And option #1—the employment option—is not impossible. It just brings the biggest up-front challenges: developing a job description, policies, procedures, and insurance to support the position and all of its record-keeping and other ethical/professional obligations, and ensuring there is a firewall between the social worker’s records and other library records.

For a library that wants to explore this, it would be good to conduct a brainstorming session about what specific benefits the library would want to get from it and how they relate to the library’s plan of service.

For example: is the primary purpose so frontline staff can immediately refer patrons who may be in distress to a nearby resource for immediate assistance? Or is it so the social worker can offer community workshops and collaborate with staff on healthy programming? Once the primary goals and add-ons are determined, a job description/business plan (for option #1) or request for proposals (for option #2) could be developed to explore making it happen; the documents would address the legal/regulatory/risk factors (like ethics and how client records are kept, since they wouldn’t be “library records”).

The good news is that in 2024 there are actual, living models out there for these approaches!  While we didn’t delve too deeply, here are some links to New York libraries with social workers on site or in affiliation:

Baldwin Public Library

Brooklyn Public Library

Emma S. Clark Memorial Library

Farmingdale Public Library

Lindenhurst Memorial Library

Middle Country Public Library

New York Public Library

 

Thank you for a great question!

 

[1] https://www.op.nysed.gov/professions/licensed-master-social-worker/consumer-information

Limiting Digital Content Access in Schools

Submission Date

Question

Within the context of recent regional school book challenges, much of the attention has been focused on print collections. However, librarians and school districts have started to look at digital content, too.

Sora is the K-12 platform used by many students and staff in NYS to access OverDrive content (as opposed to Libby, which is used by public library patrons). In Sora, content access levels can be implemented to restrict access to content.

Here is how OverDrive defines content access levels:

Content access levels let you control which types of users can view and borrow certain titles in your digital collection. Content access levels are customizable and can be different from the publisher-defined audience label.

Note: In the Libby app, users will be able to see all titles in your digital collection, regardless of content access levels. If a user tries to borrow a book that's restricted by content access level, the checkout won't be completed and the user will get an error message.

Content access levels are designed to let you manage access to titles based on age-appropriateness. Users are assigned a user type ("Adult," "Young Adult," or "Juvenile") when you set up authentication (for schools) or based on library card type (for libraries). Users can access titles at or below their access level:

"Adult" users can access all titles
"Young Adult" users can only access titles you label "Young Adult" or "Juvenile"
"Juvenile" users can only access titles you label "Juvenile"

A title's content access levels, which are assigned by you, may be different from the title's audience, which is assigned in its metadata by the publisher.
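To make the quoted rules concrete, here is a minimal sketch (my own illustration, not OverDrive's actual code; the level names are taken from the documentation above, but the function and ranking scheme are hypothetical) of how the "at or below their access level" logic behaves:

```python
# Toy model of the content access levels described above: each user
# type maps to a rank, and a user may borrow a title only if the
# title's staff-assigned level is at or below the user's own rank.
LEVELS = {"Juvenile": 0, "Young Adult": 1, "Adult": 2}

def can_borrow(user_type: str, title_level: str) -> bool:
    """Return True if a user of `user_type` may borrow a title labeled
    `title_level`. Note the title level is assigned by library staff
    and may differ from the publisher's audience metadata."""
    return LEVELS[user_type] >= LEVELS[title_level]
```

Under this model, an "Adult" user can borrow anything, while a "Juvenile" user attempting a "Young Adult" title would be refused, which matches the error-message behavior the documentation describes.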

 

I am wondering if restricting digital access to content by grade level and/or to individual students could/would be another "creative work around" to limit access that may or may not be outside of board policy?

Answer

The answer is "Yes."

Of course, behind that answer is layer after layer of complexity.

Layer 1: The "you" in the documentation quoted in the question (as in "Content access levels let you control which types of users can view and borrow certain...") could be the SLS, or could be an individual school, or even an individual employee of a school.  It's all about who has access to control the settings, which is not something that should be left to chance and happenstance.

Layer 2: Platforms like Sora are often licensed by school library systems ("SLSs"), not individual libraries or districts. This means that the access controlled by "you" might be controlled by SLS policy, rather than that of a member library (or the SLS's policy could specify that such control is handled at the district or individual library level).

Layer 3: The American Association of School Librarians discourages this type of limit in part 5 of its "Common Beliefs": "Learners have the freedom to speak and hear what others have to say, rather than allowing others to control their access to ideas and information."  This means that once content has been made a part of the school library or school library system's collection per established collection development policy, learners should have access to it.

Taking all these layers into account, a few things emerge:

First, if such access controls are implemented without attention to applicable policy, there is a grave risk of restrictions that exceed what applicable ethics, regulations, and policy allow.

Second, if there is no policy that addresses restricting access (whether by age or individual student), that feature of a system should not be used.

Third, if a system with the capability to selectively bar access is acquired, that feature should only be implemented if there is clarity about what policy governs its use, whose policy it is, and who the "you" setting the limits is.

But as the question points out, even with a policy in place, this may be a dangerous game (or a "creative work-around") when it comes to intellectual freedom, because as the AASL says: "Learners have the freedom to speak and hear what others have to say, rather than allowing others to control their access to ideas and information."

The decision to limit access to content that is part of the collection of a school library or library system is an ethically slippery slope.  A district, school, and/or school library system should think very carefully about why it would enable such limits through policy, taking care that the policy is consistent with governing ethics and regulations.

So how is a library, school district, or system to ensure students have access to appropriate content?  The development of a pedagogically appropriate school library or school library system collection lies with its collection policies, NOT with the ability to selectively control access to a collection once it is established.  This starts with trained professionals using established criteria to assemble a collection that meets the needs of the school.

By regulation (8 NYCRR 91.1), this mandate of a school library is broad: "The library in each elementary and secondary school shall meet the needs of the pupils, and shall provide an adequate complement to the instructional program in the various areas of the curriculum." [emphasis added]

By regulation (8 NYCRR 91.1), the mandate of a school library system is also broad, and it includes developing a plan for "cooperative collection development implementation," or in other words, a written plan for how cooperatively accessed materials are acquired and made available from one district to another.

There is no one way these broad mandates are achieved, and that is where the individuality of a school library system will assert itself.  But regardless of how those cooperative collection development plans are made, leaving the question "who controls collection access by age or individual identity?" unanswered is not a good option.  Through attention to applicable ethics, law, regulation, and the required collaborative governance[1], a school library system can answer that question with clarity, even if the answer is "no one."

 

[1] Governance as required by 8 NYCRR 90.18.

Audio Recording Patrons Without Permission

Submission Date

Question

A school district public library is considering installing closed-circuit cameras and thinking of enabling sound recordings, too. Is it legal to record sound, or is it a violation of patron privacy? Can board members review the tapes?

Answer

The answer to these highly specific questions will assume readers have reviewed the ALA's excellent general guidance at https://www.ala.org/advocacy/privacy/guidelines/videosurveillance and the "Ask the Lawyer" guidance here: https://wnylrc.org/raq/patron-privacy-and-police.

With that background taken as read, let's address these questions related to a closed-circuit camera with audio recording at a school district public[1] library:

Is it legal to record sound, [and/or] is it a violation of patron privacy?

In New York, recording third parties without their permission[2] is illegal "Eavesdropping" per Penal Law Section 250.05: a class E felony.

Section 250.05 is part of Penal Law Article 250 "Offenses Against the Right to Privacy," so from both the legal and ethical perspective, such recording is a violation.

Can board members review the tapes?

Assuming the tapes are visual only (and not illegal Eavesdropping), from the legal perspective, a board member could view a security camera recording, but from the ethical and risk management perspective, such viewing should only be per an established policy.

How does this all play out in the real world?

Put plainly:

A non-association library board in New York State considering use of a security camera system should ensure such a system is only used once there is a policy in place, and that policy should address the following questions:

  • What is the purpose of the cameras?
  • Where are the cameras pointing?
  • How does the library ensure use of them is consistent with applicable ethics?
  • Are any of the generated recordings patron library records?
  • How long are the recordings kept?
  • Once the retention period is past, how are the recordings disposed of?
  • How are the records secured against data breach or misappropriation?
  • Who gets to view the recordings, and why?
  • How will FOIL requests for the footage be handled?
  • How will other requests for the footage be handled?
  • When the library deems it necessary to retain recordings past their retention term, how are the recordings saved?
  • Will any of the records be archived?

Below is a template policy for a non-association public library addressing the above questions.  Areas in yellow may be customized for the needs of a particular library (make sure you remove the footnotes).

Thank you for an important array of questions.

 

 

NAME Library Policy Regarding Use of Security Cameras and Recordings

 

 

Adopted by the board on: DATE

 

Position responsible for coordinating compliance: Director[3]

 

 

Reviewed by the board: Annually

 

POLICY

To achieve the desired balance between user privacy assurance and on-site security, any use of security cameras and of records generated by such cameras ("Security Recordings") in the Library will follow the below provisions.

A. Limited Use

Cameras will be used to generally monitor the areas noted on the floor plan or survey attached as "A."[4]

Cameras will never be used to monitor the following: [insert specific areas or angles to affirmatively be excluded; common examples are bathrooms, reference desk, check-out desk].

Cameras will be set up so they do not record the content of media accessed by patrons.

B. Notice

In all areas subject to security camera recording, the Library will post a sign: "The Library values patron privacy and security.  This area is monitored by security cameras."[5]

C. Patron Records

Security Recordings showing people are considered to be patron records and the Library will not release such recordings to third parties without a court order or subpoena.[6]

D.  Viewing and Use of Security Recordings by the Library

The Library will use Security Recordings to address general and specific security needs, including but not limited to:

  • Assessing safety concerns
  • Addressing Code of Conduct-related incidents
  • Assessing operational and facility needs
  • INSERT

When footage must be reviewed by the Library, such review must be authorized by either the Library Director or by a resolution of the Library’s Board of Trustees.[7]

When a Security Recording must be retained past the period set by Section G of this policy, for any reason, the basis and plan for the retention must be authorized by either the Library Director or by a resolution of the Library’s Board of Trustees.

E.  FOIL Requests

Request for Security Recordings generated at a particular date and time shall be evaluated by the Library per its FOIL policy.

In keeping with the applicable laws, Security Recordings featuring Library users shall not be made available in response to FOIL requests.[8]

F.  Warrants, Subpoenas, Litigation Hold

Requests to disclose copies of or to retain Security Recordings per a warrant, duly issued subpoena, or "litigation hold"[9] demand will be evaluated by the Library Director or designee with advice of legal counsel as needed.

G. Retention & Data Security

The Library retains Security Recordings for [period decided by Library], unless a specific segment is required to be retained for operational purposes, in which case, such segment is retained for three (3) years as required by the Retention and Disposition Schedule for New York Local Government Records.

The Library may also identify certain footage it decides is worthy of being retained in permanent archives.

H.  Budget and Capacity

The board shall, no less than annually, review the budget and operational capacity needed to assure that the retention, disposal, and security of Security Recordings remain as required by this policy.[10]

 

[1] Very often, the "type" of public library is directly relevant to a legal question.  In this case, while there could be some overlap (especially if the library operates on district-owned property, or the library is covered by the sponsoring district's security), the type of public library does not impact the legal analysis.

[2] The actual wording of what is illegal is "intentional overhearing or recording of a conversation or discussion, without the consent of at least one party thereto, by a person not present thereat, by means of any instrument, device or equipment."  This wording is from the "definitions" (in this case, of "Eavesdropping" in Penal Law Section 250.00).

[3] POLICY DRAFTING TIP: This can be further delegated but should not be a board responsibility.

[4] POLICY DRAFTING TIP: You don't need to use a map or floorplan, but I find it handy.

[5] POLICY DRAFTING TIP: This can reflect the tone your library wants to take on this issue and can change from location to location within the library.

[6] POLICY DRAFTING TIP: There is no law stating that security footage showing use of a library is a "library record," so a library can also decide that it is NOT a library record. That said, defaulting to a firm and broad stance on privacy of library records is always a good idea and positions a library to reject a generalized request for security camera footage on the very sensible basis that doing so would violate the privacy of those in the recording.

[7] POLICY DRAFTING TIP: This can be done only by the Director, or only by resolution of the Board, but should NEVER be accomplished via the authorization of one board member, since trustees act as a body, not as individuals.

[8] POLICY DRAFTING TIP: See footnote 6.  This section can only remain if the library has decided that security recordings with library users in them are private library records.

[9] POLICY DRAFTING TIP: A "litigation hold" is when a library receives a demand to hold possible evidence.  They are usually sent by law offices and the "RE" line usually contains the phrase "litigation hold" or "duty to preserve evidence."  If your library gets one, this is a good thing to review with your lawyer!

[10] POLICY DRAFTING TIP: I included this so that the library is continually reassessing if the security system has changed and if the employees need more support for retention, destruction, or making copies of recordings.

Privacy And Zoom's AI

Submission Date

Question

Recently, Zoom introduced new AI features and updated their terms of service agreement, indicating that any user data can be used to train their AI products (TOS 10.4: https://explore.zoom.us/en/terms/). There was a backlash and Zoom quickly put out a clarification and stated that these features are opt-in only (https://blog.zoom.us/zooms-term-service-ai/). Despite this clarification, I am wondering if there are any privacy or FERPA concerns that librarians and educators need to be worried about since Zoom is still used heavily in both library and school worlds. Should we be looking for alternatives or is this just the way of the world now?

Answer

The day this story really broke (August 7, 2023, a day that will live in minor infamy), Nathan in my office pointed this issue out to me.

"Did you see that Zoom is going to use customer content to train AI?" he asked (this is what passes for casual morning conversation in my office).

My eyebrows went up, mostly because Zoom was being upfront about it, rather than because it was being done at all (because yes, this is the way of the world now).  That said, there are some tricks libraries and educators—and any business that cares about use of personal data—can employ to resist it.

Not surprisingly, this comes down to two simple things: awareness, and language.

We'll use the recent Zoom scenario to illustrate:

I am not sure how awareness of the new clause first broke (I am going to outsource that research to Nathan, and if he finds out, he'll put it in a footnote, here[1]).  But it is clear that fairly soon, consumers were unambiguously aware of the privacy and use concerns posed by the "we'll suck you into our AI" Terms of Use.

Here is the language Zoom used[2] (and has since retracted) to announce it would use our conferences, etc. to train AI:

"[You agree Zoom can use your Content] ... for the purpose of product and service development, marketing, analytics, quality assurance, machine learning, artificial intelligence, training, testing, improvement of the Services, Software, or Zoom's other products, services, and software, or any combination thereof..."

This is where language comes in.

As the world soon knew, this "old" language listed "artificial intelligence" as well as "training" (although the Terms' dubious use of commas suggests to me that Zoom could use our Content to train not just AI, but humans, too... actually an even more terrifying prospect, from some perspectives).[3]  So yes, there is lots to be concerned about when it comes to "Customer Content" (which is Zoom's term for the recordings/data/analytics that come from "Customer Input", the raw content you put into Zoom[4]).

Now let's use our awareness of the current Terms of Use (current as of August 24, 2023, at least) and see what the language says:

"10.2 Permitted Uses and Customer License Grant. Zoom will only access, process or use Customer Content for the following reasons (the “Permitted Uses”): (i) consistent with this Agreement and as required to perform our obligations and provide the Services; (ii) in accordance with our Privacy Statement; (iii) as authorized or instructed by you; (iv) as required by Law; or (v) for legal, safety or security purposes, including enforcing our Acceptable Use Guidelines. You grant Zoom a perpetual, worldwide, non-exclusive, royalty-free, sublicensable, and transferable license and all other rights required or necessary for the Permitted Uses."

Although not as stark as the old language, there is still a lot of wiggle room to squeeze a blending of Customer Content with AI there.  What if Zoom is "obligated" to provide a service, and decides to use AI to do it?  What if Zoom decides AI is needed for "enforcing Acceptable Use Guidelines?"  What if Zoom decides that AI is needed for your safety, and that, also for your safety, Customer Content must be used to train that AI?

Of course, right now, the Terms also say (in bold, so you know they mean it[5]):

"Zoom does not use any of your audio, video, chat, screen sharing, attachments or other communications-like Customer Content (such as poll results, whiteboard and reactions) to train Zoom or third-party artificial intelligence models".

So can this assurance be trusted?  This brings us back to language.

Back in the day, of course, computer systems were not "trained" (as one would train a dog, or a small child to use the toilet) but rather, "programmed."

However, even in the (relatively) slow-moving world of the law, this is no longer the case.

Here is an excerpt from a recent case[6] where lawyers were squabbling over how to gather "Electronically Stored Evidence" ("ESI"):

Defendants propose the following method for searching and producing relevant ESI:

1) Narrow the existing universe of approximately 27,000 documents...

2) Undersigned counsel reviews a statistically significant sample of the remaining e-mails at issue and marks them relevant/irrelevant to create a "training set;"

 3) That training set is then used to "train" the eDiscovery vendor's artificial intelligence/predictive coding tool, which "reviews" the remaining e-mails and assigns each a percentage-based score that measures likelihood to be responsive...

So even in the law, computer systems are being "trained", and there is a precise meaning to the term (which in plain[7] terms is "repeatedly using data and parameters to create patterns desired by the user").
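The workflow in that excerpt can be sketched in miniature. The following toy code (my own illustration, not any eDiscovery vendor's actual tool; the `train` and `score` functions are hypothetical) shows the same pattern: counsel labels a sample of documents relevant/irrelevant, and the "trained" tool then assigns each remaining document a percentage-based score:

```python
# Toy predictive-coding sketch: build word counts from a labeled
# "training set," then score unlabeled documents by the share of
# their words seen more often in relevant than irrelevant documents.
from collections import Counter

def train(labeled_docs):
    """labeled_docs: list of (text, is_relevant) pairs from the
    counsel-reviewed sample."""
    relevant, irrelevant = Counter(), Counter()
    for text, is_relevant in labeled_docs:
        (relevant if is_relevant else irrelevant).update(text.lower().split())
    return relevant, irrelevant

def score(model, text):
    """Return a 0-100 'likelihood to be responsive' score."""
    relevant, irrelevant = model
    words = text.lower().split()
    if not words:
        return 0.0
    hits = sum(1 for w in words if relevant[w] > irrelevant[w])
    return round(100 * hits / len(words), 1)
```

Real tools use far more sophisticated statistics, but the shape is the same: human-labeled examples in, pattern-based percentage scores out.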

So, with all that said, let's look at the member's questions:

Question 1: I am wondering if there are any privacy or FERPA concerns that librarians and educators need to be worried about since Zoom is still used heavily in both library and school worlds.

The short answer is: yes.

Question 2: Should we be looking for alternatives or is this just the way of the world now?

The short answer is: yes.

Here is the reason for my first short answer:  Many contracts have what I call a "we were just kidding" clause that allows the contractor to change their terms at will, and without notice.  Here is the one in the current version of Zoom:

15.2 Other Changes. You agree that Zoom may modify, delete, and make additions to its guides, statements, policies, and notices, with or without notice to you, and for similar guides, statements, policies, and notices applicable to your use of the Services by posting an updated version on the applicable webpage. In most instances, you may subscribe to these webpages using an authorized email in order to receive certain updates to policies and notices.

What does this mean?  Even though they are in bold, Zoom can change its assurance on AI at any time.

The reason for my second short answer is this: Libraries and education institutions have incredible commercial leverage when they work together.  For this reason, libraries and educational institutions should always be using their awareness of data, ethics, use, and privacy issues to demand contract language that meets their expectations.

Those expectations will change from product to product. With a product like Zoom, which can generate audio/video/text/analytics/+, including content that later may be part of a student file (FERPA) or a library record (various), the assurances should be:

  • All content entered is property of the customer (library or school);
  • At all times, all content entered into the service, or content generated with the use of customer-supplied content, may only be used to provide the current service(s) specifically authorized by the customer;
  • Any other use of data (for product improvement, for marketing) must be via a specific opt-in;
  • Terms cannot change without notice and terms in effect at the time content was generated will govern such content, regardless of future changes;
  • Customers can receive assurance that all data is purged upon request.
  • Customers can verify that they can enforce and comply with all their own internal policies and obligations regarding data creation, use, and storage.

In addition, libraries and educational institutions should have a clear set of policies for how they, as the potential owners of recordings and other data associated with the use, will use their ownership and control of the content.  It would be unfortunate, to say the least, for a student to find that their college disciplinary hearing for underage drinking is now available on YouTube.[8]

Many public library groups and academic consortia are already working to develop this type of criteria[9] (which should focus more on isolating aspirations and expectations than on legal wording, since legal wording will vary from state to state). And some institutions are designing their own services[10] in order to avoid contract terms that don't meet their criteria.

At the individual institutional level, this means building assessment of such services, and bargaining time, into the procurement process.  It also means thinking through that institution's own particular ethics and responsibilities and developing internal policies to promote them.

So, while this is the world we live in, libraries and educational institutions are well-situated to make a better one. 

Thanks for an important question.

 

 

[1] It may have been first pointed out by an anonymous user of the Reddit-like website Hacker News (https://news.ycombinator.com/item?id=37021160). This story (https://stackdiary.com/zoom-terms-now-allow-training-ai-on-user-content-with-no-opt-out/), published the same day, was shared on Twitter the next day.

[2] We didn't Wayback this.  On the day Nathan informed me of this, I asked him to pull the Terms off the site, so I could review.  We got the question to "Ask the Lawyer" about a week later.  Sometimes things just work out.

[3] What perspectives?  Ethical, moral, psychological, legal, to name a few.

[4] Definition is from paragraph "10" of the Zoom Terms of Use in effect on 8/7/2023.

[5] Like all things in law, the rules on use and interpretation of bold, underline, and italics vary from state to state.  I am not kidding.  For a great book on typography and legal writing, check out Matthew Butterick's "Typography for Lawyers."

[6] Maurer v. Sysco Albany, LLC, 2021 U.S. Dist. LEXIS 100351

[7] I trust it is painfully obvious I am not a programmer.

[8] An extreme example...then again, think of the use people have tried to make of old letters, files, and yearbooks.  Also, do we think YouTube will make it to 2033? 

Pride Month Displays

Submission Date

Question

[NOTE: We didn't get this as a submission to "Ask the Lawyer", but we wish we had...]

Our library board is considering a resolution to bar displays celebrating Pride Month.  The ban focuses on, but is not limited to, displays in children's/YA areas.  Is this a legal issue?

Answer

YES. Expressly barring library displays based on categories protected by law, such as sexual orientation and gender, is--among other things--a legal issue.

This is not to say a library can't pass a policy on library displays.  A library could easily implement a policy that requires displays to be timely, reflective of the needs of the community, and representative of an array of materials from different sources.  Such a policy, done thoughtfully and with director and attorney input, could be perfectly appropriate, legal, and in line with the mission of a public library.

In addition, such a policy could provide established, well-thought-out procedures for the library to address:

  • Concerns that a library display violates the bar on political activity by a library;
  • Concerns that a library display is age-inappropriate;
  • Concerns that the content in a library display is illegal;
  • Concerns that the display could be objected to by members of the community; and
  • Concerns that the display is boring, non-engaging, and/or irrelevant.

But what such a policy could NOT do (without tripping legal concerns) is make blanket rules about display content based on categories that align with identities protected by law.[1]

Further, if such decisions are made in a vacuum, without policy (like an ad hoc board resolution), they run the risk of being both discriminatory and "arbitrary and capricious."  Such a ban--especially coupled with the dialogue and community interaction that might precede and follow it--could set the stage for:

  • A claim of discrimination by a trustee;
  • A claim of discrimination by an employee;
  • A civil rights claim by a patron;
  • A report triggering an investigation by the New York Division of Human Rights[2];
  • A really awkward moment at the next sexual harassment training, since in New York, "sexual harassment" includes harassment on the basis of sex, sexual orientation, gender identity and the status of being transgender.

In addition, there are many local municipalities that have their own protections for certain protected categories, including sexual orientation and gender identity and expression.  So there is a risk of implicating not just state and federal, but local law, as well.

Of course, such a ban is FAR MORE than a legal issue.  But amidst everything else, it IS a legal concern.  And while their primary duty is to serve the library's mission, public library trustees also have a fiduciary duty to guard against claims that the library has violated state, federal, and local civil rights laws.

How would a library board walk back having taken such a position?  Ideally, very quickly and decisively, with confidential legal advice from their local attorney[3].  This is because in and of itself, such a ban might not be enough to trigger legal action...rather like how just vodka isn't enough to make a martini.  But who knows when the vermouth will show up?

That said, if a board is at this point (and especially if the library director and staff are watching, without being consulted[4]), even after serious consideration of such a policy or directive, change is possible.

After all, each and every library trustee and employee in New York (and even their lawyers) can always learn more about the New York Human Rights Law,[5] federal civil rights law, and perhaps even the protections in their municipality.

And public libraries are there to enable learning by everybody.

Everybody.

 


[1] In New York, that includes: race, creed, color, national origin, sexual orientation, gender identity or expression, military status, sex, disability, marital status, or status as a victim of domestic violence.

[2] https://www.nysenate.gov/legislation/laws/EXC/296 This link brings the reader to a partial list of barred discriminatory actions.  Here is an excerpt (in other words, there's more): " 2. (a) It shall be an unlawful discriminatory practice for any person, being the owner, lessee, proprietor, manager, superintendent, agent or employee of any place of public accommodation, resort or amusement,
because of the race, creed, color, national origin, sexual orientation, gender identity or expression, military status, sex, disability, marital status, or status as a victim of domestic violence, of any person, directly or indirectly, to refuse, withhold from or deny to such person any of the accommodations, advantages, facilities or privileges thereof, including the extension of credit, or, directly or indirectly, to publish, circulate, issue, display, post or mail any written or printed communication, notice or advertisement, to the effect that any of the accommodations, advantages, facilities and privileges of any such place shall be refused, withheld from or denied to any person on account of race, creed, color, national origin, sexual orientation, gender identity or expression, military status, sex, disability or marital status, or that the patronage or custom thereat of any person of or purporting to be of any particular race, creed, color, national origin, sexual orientation, gender identity or expression, military status, sex or marital status, or having a disability is unwelcome, objectionable or not acceptable, desired or solicited."

[3] And perhaps a check-in with their "directors and officers" insurance carrier.

[4] This type of issue is part of why the author consistently recommends trustees be trained on non-discrimination policies (including sexual harassment).

[5] https://dhr.ny.gov/new-york-state-human-rights-law

 
