The Black-Boxed Ideology of AWE

Antonio Hamilton and Finola McMahon

An Ideology of Black-Boxing

Black-boxing is at the heart of many computer programs. The following is a general description of black-boxing, according to Latour (1999):

An expression…that refers to the way scientific and technical work is made invisible by its own success. When a machine runs efficiently, when a matter of fact is settled, one need focus only on its inputs and outputs and not on its internal complexity. Thus, paradoxically, the more science and technology succeed, the more opaque and obscure they become. (p. 304)

While Latour's work is well known in and outside Writing Studies, the key words in this passage are opaque and obscure, as they refer directly to machinic processes becoming settled, accepted, and overlooked. Writing Studies scholar Francher (2021) develops a similar definition, writing, "Blackboxing occurs any time complex technologies are used without questioning or understanding the design choices, thereby allowing the technology to be a mystery to the user" (par. 2). These two definitions highlight different aspects of black-boxing that we argue are operationalized in AWE. First, in Latour's conception, black-boxing is a structural force: science and other large-scale forces become successful and therefore no longer need to explain themselves. Second, in Francher's definition, black-boxing is user-centric, with design foregrounded. With regard to AWE, black-boxing is both a centripetal and a centrifugal force; it coerces users to think about technologies without critiquing them, while programmers can stand back and observe how users react to the program's force.

Black-boxing of this kind is thus highly ideological by means of opacity, a key concept elaborated upon by information scientist Jenna Burrell (2016). Burrell defines opacity as follows:

Opacity seems to be at the very heart of new concerns about 'algorithms' among legal scholars and social scientists. The algorithms in question operate on data. Using this data as input, they produce an output; specifically, a classification (i.e. whether to give an applicant a loan, or whether to tag an email as spam). They are opaque in the sense that if one is a recipient of the output of the algorithm (the classification decision), rarely does one have any concrete sense of how or why a particular classification has been arrived at from inputs. Additionally, the inputs themselves may be entirely unknown or known only partially. The question naturally arises, what are the reasons for this state of not knowing? Is it because the algorithm is proprietary? Because it is complex or highly technical? Or are there, perhaps, other reasons? (p. 1)

For our purposes in this chapter, Burrell goes on to identify two ways that opacity functions to obscure or obfuscate understandings of black boxes. First, opacity occurs for proprietary reasons. Burrell writes, "The opacity of algorithms, according to Pasquale [2015], could be attributed to willful self-protection by corporations in the name of competitive advantage, but this could also be a cover for a new form of concealing sidestepped regulations, the manipulation of consumers, and/or patterns of discrimination" (p. 4). Writing Studies researcher Kevin Brock (2019), too, has noted this defensive posture of companies in his book on rhetoric in computer code: "[software] services are simply used by consumers as black boxes rather than distributed to them as standalone programs. That is, it is possible—and often profitable—to build fences and walls around the 'free' software supposedly accessible to any interested party" (p. 86; emphasis in original). To extend Burrell and Brock, proprietary opacity could be a straightforward function of capitalist competition or, in a more ethically dubious case, an attempt to skirt governmental regulation.

Second, opacity can occur through technical illiteracy, or the framing that "...writing (and reading) code and the design of algorithms is a specialized skill" (Burrell, 2016, p. 4). Others in Writing Studies have noted the functional and mechanical biases of algorithms (Brock & Shepherd, 2016, p. 22; Charlton, 2014; Johnson, 2020). In this framing, for example, black-boxing occurs when programmers do not wish to explain their mathematical models to laypeople.

While Burrell (2016) explains that opacity can also occur because machines process information differently than human beings do (pp. 4-6), we focus on the proprietary and technical-illiteracy reasons. Following the idea that "Burrell's argument helps Writing Studies researchers to demystify algorithms when they are black boxed" (Gallagher, 2020, p. 3), we aim to examine AWE and its constitutive components through a qualitative investigation. More specifically, we want to "unbox" the black boxes of AWE to reveal how some of their inner workings function. As such, we propose the following research question:

  • RQ1) What are the ways that AWE software companies black-box their software?

To answer this question, we set about analyzing the features, AI technology descriptions, and accessibility of these programs. Thus our second research question is as follows:

  • RQ2) What are the assessment features of AWE software, and what are the commonalities and differences among these programs?

With these questions in mind, we began our process to potentially "unbox," and/or make the case for the "unboxing" of, AWE programs that obfuscate their functionality for questionable reasons that this chapter will explore. Understanding these moments of opacity within AWE programs will help us better determine the place of automated writing software alongside, or as an alternative to, instructor-based writing assessment.