
Legal Updates

Generative Artificial Intelligence: Legal Considerations For Independent Schools

Rarely has a technological innovation demanded the immediate attention of educators as much as the emergence of generative artificial intelligence (“Generative AI”). Schools have faced the challenge of addressing the use of Generative AI – in particular, the ubiquitous ChatGPT – and have had to consider the impact of Generative AI on the learning process. There are many legal issues for schools to consider as well.

What Is Generative AI?

Generative AI uses existing resources to create new content, such as images, text, and audio. Generative AI has been around for decades, but advances over the past ten years have enabled developers to refine "learning" algorithms that draw on information from the internet to create more complex and coherent content.

In November of 2022, the research lab OpenAI introduced ChatGPT, a language model that is able to use Generative AI to – among other things – respond to complex questions in ways that closely mimic human responses. ChatGPT and other Generative AI interfaces, such as Dall-E and Bard, are user-friendly, allowing anyone who has access to a smartphone to generate essays, visual images and audio; translate text from one language to another; and solve complex math and science problems.

Use Of Generative AI In Schools

Generative AI has many potential applications for the school environment. For example, Generative AI can act as a virtual tutor or learning aid, and can be used as a tool for providing instantaneous and detailed feedback to students, designing worksheets and quizzes, and communicating with parents.

At the same time, schools have recognized some of the potential risks that Generative AI can pose, and are confronting in real time the issue of how Generative AI can be used to enhance – rather than stifle – the development of students’ critical thinking skills. Consequently, we have fielded many questions from our school clients regarding how to approach the issue of plagiarism, specifically students attempting to pass off work produced by Generative AI as their own. Because each response produced by programs like ChatGPT is unique – even when given the same prompt – it is extremely difficult to detect whether a student is doing their own original work or relying on Generative AI.

The use of Generative AI by students presents concerns beyond plagiarism, such as issues related to student privacy. Like many other automated interfaces, Generative AI platforms collect demographic and/or personal information about users. As discussed below, the ability of minors to access resources that collect such information may be limited by various laws and site-specific standards. In addition, content generated by AI may include biased or even demonstrably false responses, which can be detrimental to student learning.

Likewise, the use of Generative AI by teachers and other employees creates concerns related to: (a) biases that may be inherent in the algorithms used to generate content; and (b) the possible generation of content containing misinformation. Thus, the use of Generative AI to create lesson plans could increase the potential that classroom lessons will include inaccurate information. Perhaps even more concerning, when Generative AI is used to create tests and quizzes or to provide feedback on student work, biases in the Generative AI algorithms could lead to unfair – and potentially even discriminatory – grading.

Student Handbook Policies

Whether a school is embracing Generative AI or is less inclined to encourage its use, Generative AI cannot be ignored, given the rapid emergence and prevalence of the technology. Therefore, schools should consider revising relevant policies and procedures so that their communities have clear guidance regarding acceptable versus unacceptable use.

For example, a school might decide to ban student use of Generative AI altogether; allow its use only for certain types of assignments or to answer certain types of questions; allow its use only with specific permission from the teacher; or allow its use unless expressly prohibited by the teacher.

Clarifying the school’s current expectations with regard to the use of Generative AI can help to avoid ambiguity that could otherwise lead to disciplinary issues or even a legal dispute. We recommend that schools review and update their relevant student handbook policies in light of the emergence of Generative AI. Such policies may include the student code of conduct, the academic honesty policy, and the acceptable use of technology policy and/or agreement.

A school might also consider adopting a separate artificial intelligence policy that clearly explains the school’s definition of acceptable use of Generative AI, provides examples of permissible and impermissible use, clarifies expectations regarding proper citation when Generative AI resources are used, and prescribes a process to be followed – including how the school may respond – if a student is suspected of impermissible use.

Because this technology is developing rapidly, and the ways that Generative AI is used both inside and outside the classroom are likely to develop at a similar pace, schools should regularly review these types of policies to make sure that the policies continue to provide clear, relevant, and useful guidelines.

Additional Legal/Policy Considerations

Together with the policy changes outlined above, we suggest that schools consider informing parents about how they plan to address students’ use of Generative AI. The communication to families would ideally explain how Generative AI is currently being used, as well as the safeguards the school is putting in place to promote permissible use and minimize potential negative impacts of Generative AI.

In setting up these safeguards, schools should keep in mind that legal standards and/or platform-specific guidelines may apply to certain uses of Generative AI. For example, federal law sets standards regarding online platforms that collect personal information from students under the age of 13. The federal Children’s Online Privacy Protection Act (“COPPA”) allows schools to consent to the use of these online platforms on behalf of parents; however, being transparent with families about this use is strongly advised, both through communications and by having parents formally acknowledge this use in the enrollment agreement or in a stand-alone document. Seemingly in a nod to COPPA, ChatGPT prohibits individuals under the age of 13 from using the platform. Therefore, if a school serves primarily students under the age of 13, it should consider a policy that bans Generative AI (or certain Generative AI platforms) for school-related purposes and on campus.

Before directing students to use any specific resource – particularly an emerging technology like Generative AI – educators should also be aware of any platform-specific age guidelines. For example, some services and websites openly declare that they are intended for individuals age 18 or older, and schools should not direct minor students to such sites. For platforms like ChatGPT that require parental permission for use by students ages 13-18, communication with families and proper documentation regarding consent are important.

In addition, schools should consider clarifying expectations with regard to how teachers and other employees may – and may not – use Generative AI in their work, including in lesson planning, developing tests and quizzes, providing feedback to students, and communicating with students and families. If schools do allow employees to use Generative AI, they should consider clearly communicating their concerns (including regarding potential bias and discrimination) and expectations. These concerns and expectations might be conveyed in the employee handbook, in other written communications with employees, and/or during faculty meetings.

For example, schools might choose to clarify that employees will be held to the same standards for quality – with regard to the classroom experience, feedback to students, and communications with families – regardless of whether a particular employee chooses to use Generative AI for their work or not. Schools that already have a policy in their employee handbook regarding employee technology use should consider incorporating content addressing Generative AI.

Conclusions

Generative AI is a disruptive technology that – for good or ill (likely both) – is here to stay. Calculators were once banned from schools; eventually, educators embraced calculators and used them to improve student learning.

We are likely in the middle of a similar – though much more rapid – evolution with Generative AI. When ChatGPT went live last November, many schools immediately sought to ban it. Since that time, however, the trend has been for schools to seek to find ways to leverage Generative AI in the classroom to develop savvy and responsible users.

The degree to which any particular school decides to incorporate Generative AI will depend on a variety of factors, some of them cultural, some of them pedagogical. Whatever a school’s views on Generative AI, it is important to begin to address these issues now, and to develop policies that clearly reflect the school’s expectations.

* * *

If you have any questions about Generative AI, or if Schwartz Hannum PC can be of assistance in preparing or updating your school’s policies and procedures surrounding the use of this technology, please feel free to contact one of the Firm’s experienced education attorneys.