Guidance on Artificial Intelligence (AI)

Artificial intelligence (AI), machine learning, and general algorithmic tools have played a role in teaching and learning for decades, albeit generally an invisible one. From tools that students engage with to deepen and personalize learning to organizational and decision-support systems that teachers and administrators leverage, technology has become ubiquitous in all aspects of learning.

More recently, AI tools have taken on a prominent role in discussions concerning the future of education, for several important reasons:

  • Access: The public, including students, teachers, and parents, now has easy access to tools that leverage AI, such as ChatGPT from OpenAI, Claude from Anthropic, and other large language models (LLMs). Such applications have enabled anyone to use and understand the power of AI for learning and many other aspects of life.
  • Integration: Beyond obvious uses of AI, such as standalone tools, software for general and educational use now includes AI components that speed and personalize user experiences and inform decision-making.
  • Acceleration: Unprecedented investments in AI and the rapid acceleration of new forms of these tools have spurred a sense of urgency to understand and apply them thoughtfully to teaching and learning.

Now more than ever, innovations in technology remain critical to education, both as enablers and as sources of potential risk. This set of guidance and resources should help equip educators, leaders, and anyone engaged in the education community to learn more about AI and its role in education, explore and implement approaches to safe and effective integration, and establish consistent frameworks for ongoing innovation and decision-making. This work should continue as new technologies emerge with different potential roles within, and risks to, teaching and learning. Toward that end, the Commission sees as a common goal that all students have a baseline understanding of AI in the context of broader digital literacy, a core component of any “portrait of the graduate” in preparation for college and careers.


The term “AI” appears broadly in conversations about emerging technology, often with little context. Use cases range from generative AI serving as an “assistant” for student writing to fully automated robotic systems. As noted above, AI remains a broad and rapidly accelerating area of research and application. To ground this set of guidance and resources, the following categories of AI may be helpful:

  • Automated Planning and Scheduling
  • Computer Vision
  • Knowledge-Based Systems
  • Machine Learning
  • Natural Language Processing
  • Robotics

For a more detailed schematic of AI definitions and types, see page 15 of the U.S. Department of Education’s Artificial Intelligence and the Future of Teaching and Learning, which references the work of Regona et al. The Center for Integrative Research in Computing and Learning Sciences (CIRCLS) maintains an excellent and frequently updated Glossary of Artificial Intelligence Terms for Educators.

In the context of education, especially K-12 education, AI already has a place in the following arenas (listed alphabetically):

  • Accessibility and Student Accommodation
  • Assessment and Reporting
  • Classroom Instruction
  • Decision-Support Systems
  • Facility Management
  • Finance
  • Human Resources and Recruitment
  • Operational Systems
  • Professional Development
  • Strategic Planning
  • Student Services (Special Education)
  • Teacher Evaluation
  • Technology Management

 

The marketplace of educational tools that leverage AI continues to expand. As individual teachers and leadership teams consider the use of AI-powered apps, the following best practices in software adoption may help.

  • Platform Selection: Use of any software to support student learning or school operations should follow a consistent, repeatable process. A set of stakeholders should identify the need (e.g., increasing numeracy skills among fourth-grade students) and consider any shortcomings in the tools and approaches currently used to address that need. With those specific objectives documented, peer recommendations and research can bring to light potential applications that educators may wish to adopt at the classroom, school, or district level.
  • Privacy and Data Collection: As part of the selection process and in ongoing use, schools should assess the breadth and type of data that AI-powered tools collect. Public schools must adhere to Connecticut’s data privacy laws, with guidance available through the Commission’s Student Data Privacy web page. The Connecticut Educational Software Hub, powered by LearnPlatform, offers details on thousands of instructional apps to assist with this process.
  • Training and Support: Teachers will need a baseline understanding of AI technology and how it applies to their particular subject areas and grade levels. This professional development should cover teachers’ own use of AI tools for instruction, planning, and other activities, as well as how to model and support effective use by students. See the training resources below for ways to begin supporting educators and leaders in the use of AI.
  • Community Engagement and Input: Many parents and other stakeholders are learning about AI in their personal and professional lives and may have questions about its use in their children’s schools. Leaders may consider forming an advisory group around the use of technology generally and AI tools specifically to encourage a culture of learning and transparency, as well as to tap the expertise available in the community. The ILO Group offers sound guidance on community engagement in its Artificial Intelligence Framework for District Leaders.
  • Incident Reporting: To promote appropriate and ethical use of AI tools, schools should consider broadening existing incident-reporting protocols to include misuse of technology. For example, AI tools can be used to create so-called “deep fakes,” highly realistic but unauthorized images, audio, or other content.

The above best practices point to a number of potential risks when teachers and school leaders leverage AI-powered tools:

  • Bias: Machine learning systems improve by interpreting and making predictions from data sets. For example, a decision-support system may examine inputs such as grades, attendance, and discipline history to predict which students are at risk. But the “training” data sets used to hone such tools may have bias incorporated into them. In the above case, human teachers may have been more inclined to discipline students in specific subgroups, leading to an increased frequency of detentions and similar records. A computer will examine those trends and “predict” that all students in that subgroup are more likely to be at risk. The same goes for the formulas, or algorithms, that run such systems. A hiring system, for example, may examine past hiring trends for certain jobs and conclude that specific genders or life experiences make candidates more qualified, even if the existing employee pool was chosen through human bias toward one group or another. Generally, AI does not impose lenses of equity and inclusion on the systems we use. (A minimal code sketch following this list illustrates how bias in training data carries directly into predictions.)

    >> Steps to Consider: Especially for high-stakes systems that manage large swaths of sensitive student or employee data, and which leaders use to make decisions about instruction, advancement, and the like, districts should engage with providers of such systems to get additional details on training data and algorithms. Educational software developers should already be attuned to these issues and be prepared to address ethical considerations such as bias in the development of their products.
  • Inaccuracy: The concept of a “human in the loop” is essential when using AI-powered software. Such tools are extremely good at predicting the logical sequence of words, numbers, or code, but at the time of this writing they are not “sentient” or capable of interpreting results through a human-like lens. The content that systems collect and use as the basis for predictions includes accurate as well as inaccurate writing, calculations, and other material. In terms of “quality control,” a certain amount of inaccurate or even harmful and biased content inevitably becomes part of the data on which AI systems function.

    >> Steps to Consider: Always interpret AI-driven results, content, and recommendations with discernment. They can accelerate and inform decision-making and content creation, but they can also include errors.

  • Intellectual Property and Plagiarism: One of the most common AI use cases in schools today is students using tools to write papers, complete math homework, or code projects for computer science or data analytics courses. Current AI tools are quite good at this work, and the temptation to take such shortcuts can be strong.

    >> Steps to Consider: Equip teachers with relevant and timely professional development to understand how current AI tools can be used appropriately for designing prompts and interpreting results. Educators should in turn model effective and responsible use of such tools. Engaging in a “dialog” with a large language model, for example, may help students identify errors in their work or hone language for an essay. Students will use these tools throughout their future education and work lives, and schools should model effective use starting now.

  • Legal: Closely tied to the points above regarding intellectual property and plagiarism, use of AI tools still falls under existing legal frameworks concerning the misappropriation and misuse of content. At the time of this writing, Connecticut has considered but not yet put in place specific laws concerning issues such as the creation of likenesses (see CT House Bill 24-5421, “An Act Concerning Unlawful Dissemination of Intimate Images That Are Digitally Altered or Created through the Use of Artificial Intelligence”). During the 2024 legislative session, the General Assembly did put in place provisions for a pilot of “AI tools” in a limited number of schools, as well as a call for teacher professional development (see Public Act 24-151).

    >> Steps to Consider: Misuse of any type of content still falls under existing statute, so a solid understanding of general school law will help leaders frame current and future considerations regarding the unlawful creation and dissemination of content.

  • Social and Emotional: The above example of “deep fake” image creation has already had significantly damaging impacts, especially on female students. The rapid acceleration of AI capabilities points to a likely increase in the number and complexity of issues concerning student access to harmful digital content. Even less overtly damaging scenarios can negatively affect emotional well-being. Adoption of digital “companions” is accelerating, with people of all ages already chatting with AI-powered bots that simulate the behaviors of human companionship. Increased use of such tools may further isolate students, taking the place of authentic relationships.

    >> Steps to Consider: Those leading efforts to support the social and emotional well-being of students should become familiar with common AI tools and their potential applications. This work can take place through existing professional development in this area, leveraging the resources available through the Connecticut State Department of Education’s SEL page. Honest discussions with students, and support for them in addressing the use of these tools, will help identify and address the potential risks of new technologies that leverage AI.

  • Equity of Access: The enormous potential of AI tools to accelerate and personalize learning also comes with the potential to broaden existing digital divides. Students with computers, home broadband connections, and strong teacher support in modeling AI tools will have the opportunity to learn to use such apps effectively. Conversely, learners without access to technology will fall farther behind in understanding and mastering emerging tools.

    >> Steps to Consider: State and school leaders have taken great strides, especially during the COVID-19 pandemic, to ensure equitable access to technology for all students. Laptop programs in which students have a computer to take home are nearly ubiquitous at higher grade levels, and leaders should commit to establishing and maintaining such programs. Schools should also assess student access to broadband at home, a relatively easy exercise of reviewing student online activity after school hours (e.g., homework submission times; see the sketch following this list). All major Internet carriers offer low-cost options that schools can promote to disconnected families, and many districts address short- and long-term connectivity issues by offering cellular hotspots for student use or partnering with local Internet service providers for volume purchasing. The Commission is actively leading statewide efforts in this area, addressing the needs of residents broadly, through its Digital Equity Program (CT.gov/DigitalEquity).
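
To make the bias mechanism described above concrete, the following minimal Python sketch uses entirely made-up records and a deliberately naive “model.” It shows how a tool that learns from historical discipline data simply echoes whatever patterns, fair or not, those records contain; the group names, records, and scoring logic are all hypothetical.

    # Illustrative only: a naive "at-risk" predictor trained on made-up
    # historical records. The labels reflect past human decisions (which
    # may be biased), not ground truth about any student.
    from collections import defaultdict

    # Hypothetical records: (student_group, was_flagged_at_risk)
    training_records = [
        ("group_a", False), ("group_a", False), ("group_a", True), ("group_a", False),
        ("group_b", True), ("group_b", True), ("group_b", False), ("group_b", True),
    ]

    # "Train" by measuring how often each group was flagged in the past.
    flag_counts = defaultdict(lambda: [0, 0])  # group -> [flagged, total]
    for group, flagged in training_records:
        flag_counts[group][0] += int(flagged)
        flag_counts[group][1] += 1

    def predict_at_risk(group: str) -> float:
        """Return a 'risk score': the historical flag rate for the group."""
        flagged, total = flag_counts[group]
        return flagged / total

    for group in ("group_a", "group_b"):
        print(f"{group}: predicted risk = {predict_at_risk(group):.0%}")
    # Prints 25% for group_a and 75% for group_b: the model reproduces the
    # historical pattern of human decisions, biased or not.

Real decision-support systems use far more sophisticated models, but the underlying dynamic is the same: predictions can only be as fair as the data used to train them.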
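
Similarly, as one way to perform the after-hours review mentioned under Equity of Access, the sketch below scans a hypothetical export of homework-submission timestamps and flags students who never submit work after the school day ends. The field names, sample times, and 3:00 p.m. cutoff are all assumptions; a real review would draw on the district’s own learning-management-system data and treat the result as a prompt for follow-up, not proof of a connectivity gap.

    # Hedged sketch with hypothetical data: flag students whose homework
    # submissions all occur during school hours, which may signal limited
    # or no broadband access at home.
    from datetime import datetime

    # Hypothetical export of submissions: (student_id, timestamp).
    submissions = [
        ("student_01", "2024-10-01 16:45"),
        ("student_01", "2024-10-02 20:10"),
        ("student_02", "2024-10-01 11:30"),  # school hours only
        ("student_02", "2024-10-03 13:05"),
    ]

    SCHOOL_DAY_END_HOUR = 15  # assumes the school day ends at 3:00 p.m.

    def submitted_after_hours(timestamp: str) -> bool:
        """True if a submission happened after the school day ended."""
        return datetime.strptime(timestamp, "%Y-%m-%d %H:%M").hour >= SCHOOL_DAY_END_HOUR

    # Track whether each student shows any after-hours activity.
    after_hours_seen = {}
    for student_id, ts in submissions:
        seen = after_hours_seen.get(student_id, False)
        after_hours_seen[student_id] = seen or submitted_after_hours(ts)

    for student_id, seen in sorted(after_hours_seen.items()):
        if not seen:
            print(f"{student_id}: no after-hours submissions; follow up on home access")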

 

Artificial intelligence and other emerging technologies often disrupt education environments and school climates. Each of the above risks provides an opportunity to learn, grow, and address the needs of students and staff.

In this regard, the creation of new school policies that address specific technologies can prove inefficient and burdensome to maintain. District administrators and school board members should consider whether AI truly “breaks” existing policy (that is, requires separate and new guidance), or whether existing frameworks can adapt to and address emerging technologies. Ways of addressing plagiarism and cheating, harassment, bullying, and data privacy should remain consistent, even if new technologies may introduce new challenges of understanding and application.

The Commission has encouraged school boards to consider language that promotes responsible use of technology in their policy manuals (see Policy Guidance to Promote Digital Learning). Other benchmarks to consider, both within the K-12 environment and more generally, include the following:

The nature of emerging technologies and innovations in education means that any set of references and resources will remain fluid, a snapshot at a point in time. The Commission intends the following to equip district leaders, educators, and families to explore and make effective use of AI for learning at all levels.

General Reference
Policy and Frameworks
  • U.S. Department of Education: Artificial Intelligence and the Future of Teaching and Learning
Teaching Resources

The Commission for Educational Technology thanks the members of its Digital Learning Advisory Council for their guidance and input on this report:

  • Nick Caruso* (Chair) — Senior Staff Associate for Field Service, CABE
  • Jonathan Costa — Assistant Executive Director, EdAdvance
  • Larry Covino — Executive Director, Connecticut Association for Adult and Continuing Education
  • Andy DePalma — Director of Technology, EASTCONN
  • Josh Elliott — Associate Dean, Fairfield University School of Education and Human Development
  • Shaune Gilbert — Data Manager, ReadyCT
  • Jody Goeler — Senior Staff Associate for Policy Services, CABE
  • Barbara Johnson — Library Media Specialist, Colchester Public Schools
  • Jim Mindek — Information Technology, Connecticut Technical High School System
  • Karen Skudlarek — Educational Technologist, University of Connecticut
  • Josh Smith — Superintendent, Region 15 Public Schools
  • Chinma Uche — Computer Science Teacher, CREC Academy of Aerospace and Engineering, and President, CT Computer Science Teachers Association
  • Scott Zak* — Senior Director of Learning Technologies, CT State Colleges and Universities

*Also serves as a Commission member