Question
I am aware that students are engaging with generative AI inside and outside of the academic setting. If they enter their own work (an essay, research paper, etc.) into ChatGPT or Copilot for editing or other purposes, do they forfeit any of their intellectual property rights in ways that would affect the future publication of their work?
Additionally, are there any current legal ramifications for failing to declare the use of generative AI or failing to cite AI usage? I am aware of policy and reputational ramifications that can vary depending on the exact situation, so I am specifically curious about whether there are any legal repercussions for such a failure.
Thank you so much for your help!
Answer
[This answer is not being written by AI.]
The short answer to the first question (can the use of AI put intellectual property rights at risk in ways that affect future publication?) is “Yes.”
The short answer to the second question (can there be legal consequences for failing to disclose use of AI?) is “Yes.”
Unfortunately, after those initial easy answers, the risks run the gamut from “life-shattering” to “none at all.”
To illustrate, let’s take this ridiculously compound hypothetical situation:
A grad student is working on a grant-funded project to study social media use by third graders. The principal investigator[1] has developed a tool to counteract the addictive effects of social media on children; part of the project is testing it.
Because the study involves human subjects and minors, it is governed by a protocol that includes strict safety and confidentiality requirements.
The funder of the research has insisted that it will own the copyright to the research and the final work. The PI is hoping to patent the tool being tested.
The grad student is supervising three work-study undergrad students who are working with the test subjects (the third graders). The grad student is getting a stipend of 500 whole dollars for over 500 hours of work and is hoping to be named as a co-author. The undergrad students are paid by the hour.
One day, the grad student assigns the undergrads the task of completing summaries of all of the test subject results. To do this, the three undergrads (who are also trying to get through finals) tell a free AI resource: “Create a summary of this information that lists the goal of the study, the methods, the controls, and the results for each subject, removing any identifying information about the subject except age. Also provide a summary of the individual reports, noting when the method applied led to reduction in use of social media, and contrasting that result with control subjects.” They then put the raw data through the AI resource and get 20 hours of work done in less than one. They don’t tell the grad student, disclose the use of the free AI, or retain any information about the AI product used.
In a “worst-case scenario” world, some of the results could be:
- Information sufficient to deduce the identity of the test subjects (who are minors) is freely available, creating a risk to their safety and identities;
- The human subject safety and confidentiality requirements of the project are found to have been violated;
- Violation of the protocols limits the number of reputable peer-reviewed journals that will consider publishing the work and jeopardizes future funding for the PI and the institution;
- Years later, the PI’s patent is denied because the submission of the new method to the AI resource counted as a public disclosure of the invention;
- The copyright requirements of the funder are violated, as substantial portions of the research were provided to the AI without permission, so the funder demands a return of funds;
- The undergrad students are found responsible for academic integrity violations years after graduation and their degrees are revoked;
- As the supervisor, the grad student is also accused of an academic integrity violation but is found responsible only for inadequate supervision of the undergrads.
Of course, this is a worst-case scenario. It is important to remember that for every “worst case” there can be a “best case” where trustworthy AI[2] is used responsibly to enhance research, increase efficiency, and maintain appropriate confidentiality. Such use should be disclosed in the final product and assessed as part of the research methodology.
Responsible use of AI is all about details and planning.
To alert students and others to these potential impacts, it is helpful to raise their awareness of how posting to social media[3] and using certain AI products can affect them.
Below this answer is a sample “raising awareness” posting for study areas.[4]
I imagine the academic librarians out there can come up with a snappier version, but this one outlines the considerations discussed above for posting research on social media or putting it through AI.
Thank you for some great questions on important topics!
Wait. Before you put your work on social media or put it through AI:

- Think of your ethics: Does your work involve a code of ethics or professionalism?
- Think of your obligations: Is the work governed by an IRB or process that restricts disclosure?
- Think of your privacy: Anonymity on the internet is not assured, even if you don’t claim authorship.
- Think of your academic integrity: Did any of this work borrow from another in a way that could risk a charge of plagiarism?
- Think of your copyrights: Do you love this work and want to protect it? Register the copyright before you post or share it through AI.
- Think of your patents: Did you invent something? Putting it “out there” can limit your deadline for getting a patent to protect your invention.
- Think of your brand: Is this work a part of your personal or business identity? How do you want to be able to control it?
- Think of your values: Is the social media platform or AI product consistent with how you think the world should work? Do you want to be a part of it?

If you need help finding resources about academic integrity, use of AI, and the rest of this, please visit the Reference Desk. We can help.
[1] “Principal Investigator” (or “PI”) is a term for the lead researcher on a project.
[2] “Trustworthy AI” is AI that has been evaluated and found to meet the privacy, security, operability, and interpretability requirements of a particular project. Every academic institution should have a policy for evaluating the trustworthiness of AI. For more on that, see the Empire State Library Network’s September 2025 presentation, The Ultimate AI Policy for Your (Public, Academic, Museum, etc.) Library, on the “Ask the Lawyer Webinar Recordings” page.
[3] I add “social media” because there is a lot of overlap between the risks, and while younger people in 2026 are somewhat savvier about some of those risks, it is still good to educate people about them.
[4] And, perhaps, bathrooms, where it could be handy reading material.