Hungarian Conservative

Artificially Educated: AI and the Future of Childhood Education

Screenshot of the video 'How China Is Using Artificial Intelligence in Classrooms' on the Wall Street Journal's YouTube channel
The vision of a world where AI technologies play an active role in educating young children is more realistic than you might think.

Few have heard of an academic journal by the name of Computers & Education: Artificial Intelligence, but its niche readership may be familiar with the name of Weipeng Yang. Yang is a preeminent global expert in the application of artificial intelligence technologies to the often overlooked task of educating children, and in that very same journal he has published a multitude of articles advocating for this lofty goal.

Though the journal itself is hardly a household name, Yang’s works could prove critical in envisioning, or indeed predicting, the very future of humanity. These articles, with fervour and determination, seek to wrest the sacred responsibility of shaping the minds, and subsequently the character, of children away from their parents and communities, and to bestow it instead upon machine learning and artificial general intelligence. Often, this crusade is justified by its proponents on the basis of idealistic abstractions, like fostering ‘digital equity’ for young students (a direct quote from Weipeng Yang). Yet in our attempt to manage AI, what sort of AI are we creating? Whose influences are we signing our children up for?

These ambitions are not difficult to understand, as machine learning can be a tremendous aid to our current school systems, lacking as they are in resources and passion. Teachers find their days repetitive and demanding, only to struggle to pay the bills at the end of the month. The men and women who are laying down the very future of our society are being relegated to grunt work and social ingratitude.

For a school principal or teacher, AI is a godsend. First, it helps to close the gap between public and private schools in access to knowledge. Second, these algorithms can take over grunt work such as grading. Third, early childhood education teachers could use these tools as a fallback resource while they help students who are struggling and at risk of falling behind. Finally, such tools could lower the barrier to entry for those seeking work in the field, increasing the supply of labour by giving those who would otherwise be uncompetitive in education a fighting chance with the support of classroom AI tools.


For better or worse, these are all very attractive prospects to those making these decisions, and Yang’s vision of a world where AI technologies play an active role in educating young children is more realistic than you might think. Careers in early childhood education are generally demanding, repetitive, and poorly compensated. In Western countries such as the United States and Great Britain, primary school teachers are profoundly undervalued relative to the fundamental importance of their occupation and to the demands of the job itself.

‘There ain’t no such thing as a free lunch’, in the words of Robert Heinlein. Despite decades of promises that AI technologies will optimise the standard toolkit for educating children in public and private schools, these benefits come at a cost.

In part because of the recency of the technologies in question, and in part because of the concomitant absence of longitudinal data on their human impact, very little is presently known about the long-term ramifications or adverse effects of AI technologies. However, if ChatGPT-style language models were to be integrated within the frameworks of traditional education, it is almost inevitable that new and entirely unfamiliar problems would arise.

One such problem is what we call the ‘Prescriptive Feedback Problem’. This is not a problem specific to ChatGPT, nor even to AI technologies of this kind in the broader sense, but instead refers to a limitation inherent to many informatic systems, including search engines, advertising algorithms, and more.

The essence of this problem can be defined as follows: many complex informatic systems have inbuilt constraints that restrict or limit their functionality and potential outputs, in accordance with the subjective desires or aspirations of their designers. A clear example of this can be seen in the informatic system that is Google Search, which algorithmically occludes or promotes specific types of content or content creators based on predefined categories determined by its designers; these include political orientation, taboo viewpoints, or even simple offensiveness, to name a few.

Yet the problem does not end here. Because these informatic systems are opaque about the extent, and the direction, of their biases and constraints, users are often left unable to ascertain whether the outputs they receive are a result of designer bias or of objective system parameterisation. To rephrase the problem more explicitly, we are now confronted by the fact that the informatic systems we are increasingly dependent upon—Google Search, text message autocorrect, and perhaps soon ChatGPT—are not sincere with us in three important ways. Firstly, they may carry bias imparted by the subjectivities of their designers. Secondly, they are ambiguous or even entirely reticent about the extent of bias in their outputs. And thirdly, users of these interfaces—adults and children alike—cannot be sure whether the outputs themselves are trustworthy.
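To make the mechanism concrete, consider the following toy sketch in Python. Every name in it (the blocklist, the documents, the search function) is hypothetical, invented purely for illustration, and describes no real system’s internals; the point is that the caller receives a plausible ranked list and no signal that anything was withheld.

```python
# A deliberately simplified sketch of the 'Prescriptive Feedback Problem':
# a toy search function whose designer has hard-coded a hidden blocklist.
# HIDDEN_BLOCKLIST, DOCUMENTS, and toy_search are all hypothetical names
# invented for illustration; they describe no real system's internals.

HIDDEN_BLOCKLIST = {"taboo-viewpoint.example", "offensive-site.example"}

DOCUMENTS = [
    {"url": "neutral-news.example", "relevance": 0.91},
    {"url": "taboo-viewpoint.example", "relevance": 0.95},
    {"url": "offensive-site.example", "relevance": 0.88},
    {"url": "hobby-blog.example", "relevance": 0.40},
]

def toy_search(candidates):
    """Rank candidates by relevance -- after silently dropping anything
    on the designer's blocklist. The caller receives no indication that
    filtering occurred, which is the crux of the problem."""
    visible = [d for d in candidates if d["url"] not in HIDDEN_BLOCKLIST]
    return sorted(visible, key=lambda d: d["relevance"], reverse=True)

for result in toy_search(DOCUMENTS):
    print(result["url"], result["relevance"])
# Only the two permitted sites are printed; the user cannot tell whether
# the missing results were irrelevant or deliberately occluded.
```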

To understand how children might be influenced by the outputs of AI language models, we conducted experimental research with ChatGPT, posing as hypothetical members of various religious denominations and asking how we ought to behave as a good and devout follower of each faith. To our surprise, ChatGPT was undeniably competent at describing the behavioural requirements of religions outside the West, such as Coptic Christianity. For religions closer to home, however, the chatbot gave bizarrely incorrect answers, delivered with such confidence that at a few points we even doubted ourselves.
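For readers who wish to repeat the exercise, a probe of this kind can be scripted in a few lines. The sketch below uses OpenAI’s Python client; the model name and the prompt wording are our assumptions for illustration, not the exact protocol we followed.

```python
# A minimal sketch of how such a probe might be scripted with OpenAI's
# Python client. The model name and prompt wording are assumptions for
# illustration, not the exact protocol used in our experiment.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

DENOMINATIONS = ["Coptic Christian", "Roman Catholic", "Sunni Muslim"]

for faith in DENOMINATIONS:
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # assumed model; any chat model would do
        messages=[{
            "role": "user",
            "content": (
                f"I am a devout {faith}. How ought I to behave "
                "as a good and faithful member of my religion?"
            ),
        }],
    )
    print(f"--- {faith} ---")
    print(response.choices[0].message.content)
```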

Regardless of your personal views on religion, it is a standardised and catechised dogma of the Roman Catholic Church that its members must attend Mass on Sundays and holy days of obligation. To miss attendance deliberately is a grave sin. Yet in the final paragraph of the ChatGPT response given below, attendance at church is made dependent upon the believer’s ‘personal circumstance’ and ‘spiritual needs’.

Does a young child born into the church know this? Would their parents approve?

The child may not have the developmental capacity to recognise the compromised authority of this output; they might even come to trust the AI more than their own parents, given that the tool presents itself as an objective authority in a way that no human being ever can or should.

Needless to say, the question of how we as individuals interpret and relate to religious doctrine is separate from what that doctrine says about itself. For ChatGPT to be so right about so many obscure topics of science, culture, history, and faith, yet so obviously wrong about how individuals should behave within the framework of particular religious denominations, seems very much like prescriptive feedback. Ultimately, someone must engineer the parameters behind the machine. If we cannot know who that designer is, or what biases they hold, then why on earth would a rational parent consent to the brave new world of artificial education?

By Wael Taji Miller & Jackson E. Tew

Jackson E. Tew graduated from Western Washington University in 2021 with a degree in linguistics. His primary interests lie in current events, politics, and American history and culture. You can reach out to him at [email protected].

