Will AI Improve Education or Strip It of Humanity?

The State of Ohio has used AI to assess student writing on end-of-course exams since 2018.
Chengyu Li

Our generation of students and teachers has seen artificial intelligence seep into all aspects of our lives. Its incursion into education has sparked both enthusiasm and concern. 

AI could make learning more personalized and teaching more efficient. However, the ethical dilemmas posed by this technology cannot be overlooked.

AI offers many benefits. It can be used to customize educational experiences to meet each student’s unique needs. 

According to a study by the Brookings Institution, AI-driven adaptive learning systems can assess students’ strengths and weaknesses in real time, offering tailored exercises and feedback that can enhance learning outcomes.

This level of customization is simply not attainable in a traditional classroom setting where a one-size-fits-all approach leaves some students behind while others are unchallenged.

We understand how customization can be attractive to educators.

Furthermore, AI may allow educators to focus more on teaching and less on paperwork. Automated grading systems, for example, could significantly reduce the time teachers spend grading assignments, providing fast and consistent feedback. On the other hand, many educators do not trust AI’s capabilities, and as reported in this issue of The Beachcomber, experts who use AI in a variety of fields emphasize that the technology still requires humans to review the work.  

The state of Ohio already uses AI to assess student performance on state standardized tests.

“Ohio expanded to a fully online administration and machine scoring system for all grades and content areas beginning spring 2018,” according to the Ohio Department of Education and Workforce website. “By making this transition, test results [are] available one to three weeks earlier than [they would be otherwise].”

According to the Ohio Department of Education, “The scoring engine is programmed based on student responses to field-test questions that have been scored by humans.” However, there are concerns about whether AI grades bilingual students’ tests fairly and accurately.

Additionally, a report by the RAND Corporation highlights that AI-powered tools can streamline administrative tasks, from scheduling to communication, thereby increasing overall school efficiency. Specifically, according to the report, teachers use AI to generate content and rely on chatbots, virtual learning platforms and adaptive learning systems within the classroom.

Despite these advantages, the integration of AI in education raises serious concerns. 

First among these issues is data privacy. AI systems rely on vast amounts of data to function effectively, raising questions about how this data is collected, stored and used. For example, if AI were used to alert parents about a child’s missing assignment or absence, the software would need access to the student’s private personal information, potentially including GPS location, messages, family history, previous attendance records and biometric data.

Many schools use AI-powered monitoring tools such as Gaggle to track students’ computer activity. While these programs are deployed for safety reasons, such as scanning for keywords that may signal a mental health crisis, they also raise questions about privacy and data security. Beachwood City Schools uses GoGuardian, another AI-powered tool, to monitor student activity.

An article published in the National Law Review warned that without stringent data protection measures, students’ personal information could be vulnerable to breaches and misuse.  

It is also important to emphasize the role of teachers. While AI can streamline the teaching process, it should not replace the human touch that is vital to education. As students who have experienced a fantastic education system, we cannot overstate the impact that our teachers have had on us throughout our school careers.

Teachers provide emotional support, mentorship and a nuanced understanding of students’ needs. At the end of the day, students and teachers are on a journey together, one that shapes educated, hopeful individuals for the society of the future.

The connections that teachers may share with students and the support that they give cannot be replicated by AI. 

For example, do we want teachers using AI to write our letters of recommendation? 

While using AI to write letters of recommendation may save the writer time and help with grammar, there are concerns about AI fabricating information about the student or producing letters that are vague or reliant on stereotypes. Most importantly, an AI-generated letter would lack the emotion, detail and humanity of a letter written by a teacher who knows the student and has formed a relationship with them.

Another significant concern is equity in education. There is a real danger that AI could exacerbate existing educational inequalities. Schools in affluent areas are more likely to be able to afford cutting-edge AI technologies, leaving underfunded schools further behind. Worse, schools in underfunded districts could end up replacing teachers with less-than-adequate technology.

This digital divide threatens to widen the gap between privileged and disadvantaged students. 

As the Education Trust notes, ensuring equitable access to AI tools is crucial if these technologies are to fulfill their potential in democratizing education rather than deepening disparities.

Furthermore, the ethical implications of AI in education cannot be ignored. The deployment of AI systems that make decisions about student learning and progress raises questions about transparency and accountability. 

Algorithms can perpetuate biases present in their training data, leading to unfair treatment of certain student groups. An investigation by MIT Technology Review revealed instances where AI-driven systems exhibited racial and gender biases, potentially harming students’ educational experiences and outcomes. 

“The introduction of bias isn’t always obvious during a model’s construction because you may not realize the downstream impacts of your data and choices until much later,” the investigation found. “Once you do, it’s hard to retroactively identify where that bias came from and then figure out how to get rid of it.”

If educators use AI systems, they need to ensure that those systems are transparent, accountable and free from bias. And so, while AI does bring exciting opportunities, it should be implemented only with care and further investigation.

Clearly, the integration of AI into schools holds the potential to revolutionize education, but these benefits come with significant risks. 

As educators incorporate AI into classrooms, they must be careful to use it as a tool that strengthens our educational system rather than strips it of humanity.
