Risks of AI Foundation Models in Education

Authors: Su Lin Blodgett and Michael Madaio


In response to “On the Opportunities and Risks of Foundation Models” (Bommasani et al., 2021)

If the authors of a recent Stanford report [Bommasani et al., 2021] on the opportunities and risks of “foundation models” are to be believed, these models represent a paradigm shift for AI and for the domains in which they will supposedly be used, including education. Although the name is new (and contested [Field, 2021]), the term describes existing types of algorithmic models that are “trained on broad data at scale” and “fine-tuned” (i.e., adapted) for particular downstream tasks, and is intended to encompass large language models such as BERT or GPT-3 and computer vision models such as CLIP. Such technologies have the potential for harm broadly speaking (e.g., [Bender, Gebru, et al., 2021]), but their use in the educational domain is particularly fraught, despite the potential benefits for learners claimed by the authors. In section 3.3 of the Stanford report, Malik et al. argue that achieving the goal of providing education for all learners requires more efficient computational approaches that can rapidly scale across educational domains and contexts, for which they claim foundation models are uniquely well-suited. However, evidence suggests not only that foundation models are unlikely to achieve the stated benefits for learners, but that their use may also introduce new risks of harm.

That is, even if foundation models work as described, the nature of their design and use may lead to a homogenization of learning in ways that perpetuate the inequitable status quo in education [cf. Madaio et al., 2021], and which may lead to increasingly limited opportunities for meaningful roles for educational stakeholders in their design. In addition, the all-encompassing vision for foundation models in education laid out by Malik et al. may privilege those aspects of education that are legible to large-scale data collection and modeling, further motivating increased surveillance of children under the guise of care and further devaluing the human experiences of learning that cannot be ingested into foundation models.

Risks of educational technologies at scale

While perhaps well-meaning, the argument that developing large-scale models for education is for students’ benefit is one often used to motivate the increasing (and increasingly harmful) use of educational technologies, particularly educational surveillance technologies (cf. Collins et al., 2021). Recent research suggests that equity is used by educational AI researchers as a motivation for designing and deploying educational AI, but it is rarely studied as a downstream effect of such technologies [Holmes et al., 2021]. Indeed, educational AI systems, including computer vision and natural language technologies, are increasingly deployed to monitor students in the classroom [Galligan et al., 2020], in their homes during high-stakes assessments [Cahn et al., 2020; Swauger, 2020; Barrett, 2021], and across their digital communication platforms [Keierleber, 2021]. These applications are often motivated by appeals to care for students’ well-being [Collins et al., 2021], academic integrity [Harwell, 2020], or supporting teachers [Ogan, 2019]; and yet, these noble goals do not prevent these technologies from causing harm to learners. As we argue in a recent paper [Madaio et al., 2021], the fundamental assumptions of many educational AI systems, while motivated by logics of care, may instead reproduce structural inequities of the status quo in education.

In addition to students’ well-being, the argument for developing foundation models for education relies on economic logics of efficiency — that given the rising costs of education, approaches are needed that can provide education “at scale”. However, this argument has been made many times before by educational technologists and education reformers, with outcomes that have not, in fact, benefited all learners. It is thus worth taking seriously the lessons of historical arguments for using technology to provide education at scale. Historians of educational technology such as Larry Cuban and Audrey Watters have argued that a century of educational technologies supposedly designed to provide more efficient learning (e.g., educational radio, TV, and computers) has instead led to widespread second-order effects, such as promoting more impersonal, dehumanizing learning experiences and further devaluing the role of teachers, counterbalanced by teachers’ widespread resistance to the adoption and use of these technologies [Cuban, 1986; Watters, 2021].

Indeed, one can look to recent claims that Massive Open Online Courses (MOOCs) would provide education for learners around the world who could not afford to access a university [Pappano, 2012]. Although this may have been true for some learners, the vast majority of learners who used and completed MOOCs were already well-educated learners from the Global North, and the majority of courses on major MOOC platforms were offered in English by North American institutions [Kizilcec et al., 2017]. Even for learners who did have access to them, the design and use of MOOC platforms amplified particular ideologies about teaching and learning, including an “instructionist” teaching paradigm built around lecture videos, which learning science research suggests is less effective than active learning [Koedinger et al., 2015] and which may not be effective across multiple cultural contexts. More generally, other (non-digital) technologies of scale, such as educational standards (e.g., the Common Core State Standards) and standardized tests such as the SAT, act as “racializing assemblages” [Dixon-Román et al., 2019] that may reproduce sociopolitical categories of difference in ways that reinforce long-standing social hierarchies.

These histories teach us that we should critically interrogate claims about the ability of technology to provide education at scale for all learners, much like claims about the benefits of technologies of scale more generally [Hanna and Park, 2020]. In addition to interrogating the potential harms of the scalar logics of foundation models, several of which we identify here, we also suggest interrogating who benefits from this drive to scale, and what alternatives it forecloses. Devoting time and resources to educational technologies built atop foundation models not only diverts our attention away from other educational technologies we might develop (or the question of whether we should develop educational technology at all), but further entrenches the status quo, allowing us to avoid asking hard questions about how existing educational paradigms shape learning processes and outcomes in profoundly inequitable ways [Madaio et al., 2021].

Risks of homogenization

The adaptability of foundation models, where a few large-scale pre-trained models enable a wide range of applications, brings with it particular risks of homogenization1 for educational applications. That is, design decisions for foundation models — including decisions about tasks, training data, model output, and more — may lead to homogenization of pedagogical approaches, of ideologies about learning, and of educational content in ways that may perpetuate existing inequities in education [cf. Madaio et al., 2021], particularly when such technologies are intended to be deployed “at scale” across contexts.

Specifically, the choices of data proposed for pre-trained models for education smuggle in particular ideologies of teaching and learning that may promote a homogenized vision of instruction, much as previous technologies of education at scale reproduced instructionist modes of teaching2 [Koedinger et al., 2015], and that may lead to downstream harms for learners, despite claims for these technologies’ benefits. For example, Malik et al. propose using feedback provided to developers on forums such as StackOverflow to train feedback models, but the feedback provided on such forums may not be delivered in pedagogically effective ways, and often reproduces toxic cultures of masculinity in computer science in ways that actively exclude novice developers and women [Ford et al., 2016].

Finally, much as Bender, Gebru, et al., 2021 have observed for NLP training corpora more generally, the corpora suggested by Malik et al. for training foundation models, such as Project Gutenberg, include texts largely written in English [Gerlach and Font-Clos, 2020], which may produce representational harms [cf. Blodgett et al., 2020] by reproducing dominant perspectives and language varieties and excluding others. Similarly, Dodge et al., 2021 have found that a filter used to create the Colossal Clean Crawled Corpus (C4, a large web-crawled corpus used to train large English language models), “disproportionately removes documents in dialects of English associated with minority identities (e.g., text in African American English, text discussing LGBTQ+ identities)”. In reproducing socially dominant language varieties, foundation models may require speakers of minoritized varieties to accommodate to dominant varieties in educational contexts, incurring higher costs for these speakers and denying them the legitimacy and use of their varieties [Baker-Bell, 2020]. One setting in which such harms are likely to arise is the use of foundation models for feedback on open-ended writing tasks, as the authors of the report propose. In other work in this space, automated essay scoring and feedback provision have been shown to have roots in racialized histories of writing assessment that are difficult for data-driven technologies trained on such rubrics to avoid [Dixon-Román et al., 2019], and automated approaches to writing scoring and feedback may induce students to adopt writing styles that mirror dominant cultures [Mayfield, 2019].

In this way, foundation models may reproduce harmful ideologies about what is valuable for students to know and how students should be taught, including ideologies about the legitimacy and appropriateness of minoritized language varieties. Given the broader risks of homogenization of foundation models, they may amplify these ideologies at scale.

Risks of limited roles of stakeholders in designing foundation models

In education, decisions about curricula and pedagogy are often made through sustained critical evaluation and public debate about what to teach and how to teach it (cf. Scribner, 2016). However, by relying on foundation models for broad swaths of education (as Malik et al. propose), decisions about what is to be taught and how students should be taught may be made without the involvement of teachers or other educational stakeholders. Despite claims elsewhere in the Stanford report that foundation models will support human-in-the-loop paradigms (cf. section 2.4), the pre-trained paradigm of foundation models will likely entail limited opportunities for educational stakeholders to participate in key upstream design decisions, limiting their involvement to the use of models once such models are trained or fine-tuned. As with AI more generally, despite rhetoric about the importance of stakeholder participation in designing AI systems [Kulynych et al., 2020], the reality of current industrial paradigms of training foundation models on massive datasets, which requires massive (and expensive) compute power, may limit stakeholders’ ability to meaningfully shape choices about tasks, datasets, and model evaluation.

This narrow scope for the involvement of key stakeholders such as teachers and students is at odds with participatory, learner-centered paradigms from educational philosophy (e.g., Freire, 1996; Broughan and Prinsloo, 2020) and the learning sciences [DiSalvo et al., 2017], where learners’ interests and needs shape teachers’ choices about what and how to teach. In addition, this may further strip teachers of meaningful agency over choices about content and pedagogy, contributing to the deskilling of the teaching profession in ways seen with earlier technologies of scale in education (cf. Cuban, 1986; Watters, 2021).

Risks of totalizing visions of foundation models in education

All of this raises concerns about the expansive claims made for the application of foundation models in education to “understand” students, educators, learning, teaching, and “subject matter.” The list of potential uses of foundation models in education claimed by Malik et al. is evocative of the totalizing rhetoric popular in computer science more generally, such as for software to “eat the world” [Andreessen, 2011]. Crucially, this totalizing vision suggests that everything that matters to learning can be rendered into grist for foundation models.

First, we note that in the education section, “understanding” is used to refer to at least three distinct phenomena: students’ “understanding” of subject matter, foundation models’ “understanding” of subject matter, and foundation models’ “understanding” of pedagogy and student behavior. But as Bender and Koller, 2020 have argued, NLP systems (including foundation models) do not have the capability to “understand” language or human behavior. Although the pattern matching that underlies foundation models (for NLP or otherwise) may produce outputs that resemble human understanding, it is not understanding, and this rhetorical slippage is particularly harmful in educational contexts. Because supporting learners’ conceptual understanding is a primary goal of education, conflating models’ representational ability with comprehension of subject matter, and conflating students’ development of conceptual or procedural knowledge with foundation models’ pattern-matching capabilities, may lead teachers and other educational stakeholders to place trust in the capabilities of foundation models in education when such trust is not warranted.

More generally, the paradigm of foundation models as laid out by Malik et al. requires that teaching and learning be formalized in ways that are legible to foundation models, without interrogating the potential risks of formalizing teaching and learning in this way, nor the risk that fundamental aspects of education will be discarded if they do not fit into this paradigm. In other words, what forms of knowledge are privileged by rendering them tractable to foundation models? What is lost in such a partial, reductive vision of teaching and learning?

As one example, foundation models may be able to reproduce patterns of pedagogical decisions in their training corpora, but those datasets or models may not be able to capture why those decisions were made. For instance, good teachers draw on a wealth of contextual information about their students’ lives, motivations, and interests; information which may not be legible to foundation models. In some cases, the response from AI researchers may be to simply collect more data traces on students in order to make these aspects of students’ lives legible to modeling; however, this “rapacious”3 approach to data collection is likely to harm students through ever-increasing surveillance [Galligan et al., 2020; Barrett, 2021].

Despite expansive claims about the potential of foundation models to radically transform teaching and learning in ways that benefit learners, the history of educational technologies suggests that we should approach such claims with a critical eye. In part, the proposed application of foundation models for education brings with it risks of reproducing and amplifying the existing inequitable status quo in education, as well as risks of reproducing dominant cultural ideologies about teaching and learning, in ways that may be harmful for minoritized learners. In addition, the properties of the foundation model paradigm that lend it its appeal — large-scale, pre-trained models adaptable to downstream tasks — are precisely what would likely limit opportunities for meaningful participation of teachers, students, and other education stakeholders in key decisions about their design. Education is a fundamentally public good; rather than centralizing power in the hands of institutions with sufficient resources to develop large-scale models, educational technologies, if they are to be designed, should be designed in ways that afford more public participation and are responsive to the needs and values of local contexts.

Footnotes

  1. Here we use “homogenization” to refer to the homogenization of outcomes emerging from the use of foundation models, as used in Section 5 of the report (as opposed to the homogenization of models and approaches, as used in Section 1). 

  2. Indeed, Malik et al. propose using lecture videos from online courses to train instructional models. 

  3. https://mobile.twitter.com/hypervisible/status/1442473891381710858 


References

Andreessen, M. Why software is eating the world. Wall Street Journal. 2011.

Baker-Bell, A. Linguistic justice: Black language, literacy, identity, and pedagogy. Routledge. 2020.

Barrett, L. Rejecting Test Surveillance in Higher Education. Available at SSRN 3871423. 2021.

Bender, E. M., Gebru, T., et al. On the Dangers of Stochastic Parrots: Can Language Models Be Too Big? 🦜 Proceedings of the Conference on Fairness, Accountability, and Transparency (FAccT). 2021.

Bender, E. M. and Koller, A. Climbing towards NLU: On Meaning, Form, and Understanding in the Age of Data. Proceedings of the Annual Meeting of the Association for Computational Linguistics (ACL). 2020.

Bommasani, R. et al. On the Opportunities and Risks of Foundation Models. arXiv. 2021.

Broughan, C., and Prinsloo, P. (Re)centring students in learning analytics: in conversation with Paulo Freire. Assessment & Evaluation in Higher Education, 45(4), 617-628. 2020.

Cahn, A. F., et al. Snooping Where We Sleep: The Invasiveness and Bias of Remote Proctoring Services. Surveillance Technology Oversight Project. 2020.

Collins, S., et al. The Privacy and Equity Implications of Using Self-Harm Monitoring Technologies. Future of Privacy Forum. 2021.

Cuban, L. Teachers and Machines: The Classroom of Technology Since 1920. Teachers College Press. 1986.

DiSalvo, B., et al. Participatory design for learning. Routledge. 2017.

Dixon-Román, E., Nichols, T. P., and Nyame-Mensah, A. The racializing forces of/in AI educational technologies. Learning, Media and Technology, 45(3), 236-250. 2019.

Dodge, J., et al. Documenting Large Webtext Corpora: A Case Study on the Colossal Clean Crawled Corpus. Proceedings of the Conference on Empirical Methods in Natural Language Processing (EMNLP). 2021.

Field, H. At Stanford’s “foundation models” workshop, large language model debate resurfaces. Emerging Tech Brew. 2021.

Ford, D., et al. Paradise Unplugged: Identifying Barriers for Female Participation on Stack Overflow. Proceedings of the ACM SIGSOFT International Symposium on the Foundations of Software Engineering (FSE). 2016.

Freire, P. Pedagogy of the oppressed (revised). New York: Continuum. 1996.

Galligan, C., et al. Cameras in the classroom: Facial recognition technology in schools. University of Michigan Science, Technology, and Public Policy. 2020.

Gerlach, M. and Font-Clos, F. A standardized Project Gutenberg corpus for statistical analysis of natural language and quantitative linguistics. Entropy, 22(1), 126. 2020.

Hanna, A. and Park, T. M. Against scale: Provocations and resistances to scale thinking. arXiv. 2020.

Harwell, D. “Cheating-detection companies made millions during the pandemic. Now students are fighting back.” Washington Post. 2020.

Holmes, W., et al. Ethics of AI in education: towards a community-wide framework. International Journal of Artificial Intelligence in Education, pp.1-23. 2021.

Keierleber, M. Exclusive Data: An Inside Look at the Spy Tech That Followed Kids Home for Remote Learning–and Now Won’t Leave. The 74 Million. 2021.

Kizilcec, R. F. and Lee, H. (forthcoming). Algorithmic Fairness in Education. In W. Holmes & K. Porayska-Pomsta (eds.), Ethics in Artificial Intelligence in Education, Taylor & Francis.

Kizilcec, R. F., et al. Closing global achievement gaps in MOOCs. Science, 355(6322), 251-252. 2017.

Koedinger, K., et al. Learning is not a spectator sport: Doing is better than watching for learning from a MOOC. In Proceedings of the Second ACM Conference on Learning @ Scale. 2015.

Kulynych, B., et al. “Participatory approaches to machine learning”. International Conference on Machine Learning Workshop. 2020.

Madaio, M., et al. (forthcoming). Beyond “Fairness”: Structural (In)justice Lenses on AI for Education. In W. Holmes & K. Porayska-Pomsta (eds.), Ethics in Artificial Intelligence in Education, Taylor & Francis.

Mayfield, E. Individual Fairness in Automated Essay Scoring. Proceedings of the Workshop on Contestability in Algorithmic Systems, ACM Conference on Computer-Supported Collaborative Work (CSCW). 2019.

Ogan, A. Reframing classroom sensing: Promise and peril. Interactions, 26(6), 26-32. 2019.

Pappano, L. The Year of the MOOC. The New York Times. 2012.

Scribner, C. F. The fight for local control. Cornell University Press. 2016.

Swauger, S. Our bodies encoded: Algorithmic test proctoring in higher education. Critical Digital Pedagogy. 2020.

Watters, A. Teaching machines: The history of personalized learning. MIT Press. 2021.