Envisioners press buttons to inform, in the strictest sense of that word, namely, to make something improbable out of possibilities. They press buttons to seduce the automatic apparatus into making something that is improbable within its program. They press but­tons to coax improbable things from the whirring particle universe that the apparatus is calculating. And this improbable world of envisioning power surrounds the whirring particle universe like a skin, giving it a meaning. The power to envision is the power that sets out to make concrete sense of the abstract and absurd universe into which we are falling.

Vilém Flusser, Into the Universe of Technical Images

Figure 1. Alan Warburton, RGBFAQ, 2020. Video. Still.

I want to take this opportunity both to reflect on the epistemological questions that often accompany discourses on AI and animation, and to comment on a work that has deeply influenced the way I think about the topic and that has been enormously useful for my students as well. Alan Warburton’s video essay RGBFAQ (2020) provides a historical overview of the “computational image,” leading us from its development in the high-stakes military simulations of the Second World War, through the multiplication of render elements in computer graphics and the explosion of minable data in smartphone photos, to deposit us in a strange loop where machine learning algorithms designed to operate on the “real world” are trained on synthetic datasets. All the while, Warburton explores the cultural transformations attendant on these expanding technical and aesthetic modes. I won’t describe it or relate too many of its details because you can see it for yourself here. In what follows, I will use the way it configures relations between visual images and computational procedures to posit animation’s challenge to the logic of the black box.

One of the central questions animation scholars and practitioners pose about computer animation focuses on the way that render algorithms, presets, and what Elizaveta Shneyderman calls “grab-and-go” objects might constrain and normativize production, spectatorship, and cultural forms and experiences more broadly.[1] (None of these procedures necessarily use AI or machine learning, but where they do, the problematic is only intensified.) This inquiry into current and future practices in animation resonates with work by media theorists whose concern is the inaccessibility of computational processes to human sensory perception and the accompanying discorrelation of the image from the scale and scope of the human body.[2] There is a sense, one I share, that much more than we might have imagined takes place beyond the ken of human perception and intervention. It may be that, at least for those of us living in, or in the wake of, the high-tech networked megalopolis, this is part and parcel of the structure of feeling of the twenty-first century.

Figure 2. Alan Warburton, RGBFAQ, 2020. Video. Still.

Interestingly, these questions frame Warburton’s essay as well, which begins by suggesting that the virtual world was born in the black box of early computer simulations and ends with the question: Are we still outside the box? I was particularly struck by the question because of its context: the work itself is so, well, illuminating. The video essay is gorgeously animated, and the way Warburton presents the layers—or render elements—that constitute what he calls the “exploded image” of computer graphics, and the multiplicity of data visions that can be pulled out of any cell phone selfie, is a compelling and lucid rendering of the very algorithmic processes whose effects we see but whose procedures are not visible to us, at least not in conventional terms. Like a simulated x-ray or fMRI, RGBFAQ envisions the computational dimensions of the computational image through their visual animation, enabling spectators to explore, as Tobias Revell remarks, “complex entanglements that are often very difficult to grasp in our everyday experience.” As well as framing the narrative, the black box is a central visual motif of the work, appearing numerous times, including at the beginnings and ends of its chapters. Warburton’s narrative suggests that while the black box was initially on the computational side of the computational image, it is in fact images “proper” that, like pied pipers, ultimately draw everything—AI, hardware and software, biological bodies, mathematical formulations, perceptual operations—dancing into its maw.

Figure 3. Alan Warburton, RGBFAQ, 2020. Video. Still.

We could imagine this scenario of no longer being outside the box as one in which we all live our lives in a simulation inside a computer, in the manner of the computer-graphical architecture of Tron. But we can think of it in another way. As I argued in The Animatic Apparatus,[3] the historical conditions of the contemporary digital mediasphere intensify what has perhaps always been true of animation: that it is a species of simulation rather more than it is a species of representation (though the latter is no doubt a feature of particular instantiations). Animation, as RGBFAQ demonstrates, upends the logic that would distinguish the “virtual” (in the sense of virtual world) from the “real.” There is a sense, for example, in which figurative, representational paintings have been viewed as windows, openings through which to view a world. But I don’t think we have ever viewed moving-image animation in quite this way: an aspect of otherworldliness—or other-worldedness—is always in play. That is, animation does not function in the mode of a dialectic relation between transparency and opacity as a window does. This does not make it less able to render and produce experience in a phenomenologically direct way, as Warburton shows; it only displaces truth-as-transparency as an objective and ground for judgment.

I do not mean to suggest that there are no such things as black-boxed models for AI or machine learning datasets—or no such thing as transparent ones, for that matter—as these are the disciplinary names for procedures of conceptualizing and implementing algorithms.[4] I very much share the concerns of the scholars mentioned above who see the rise of algorithmic image processes and AI animation as a challenge to both theorists and practitioners, precisely because of the significant ways in which these have changed our phenomenological relationship to images. What I question is the framing of these issues in terms of epistemic accessibility and inaccessibility, visibility and invisibility, and so on. From religious practices and ethnography to mathematics and physics,[5] direct, transparent visual or sensory access has not always determined epistemological or ontological access. Whether we are thinking about deities or equations or parallel worlds or the question of whether the sublime is inside or outside of art, we can see a struggle to fit potential outliers into the terms of the Enlightenment knowledge project. My own sense of opaque and threatening forces is due at least as much to the Anthropocene’s revelation of the failure of this project as it is to the speeds and scales of computational protocols. What I am pointing to here—and I think this is an important feature of Warburton’s work as an animator—is that animation provides ways of addressing and understanding our relationship to our contemporary pluriversal and planetary condition that point away from conventional relations of knowledge and toward what Vilém Flusser describes in the epigraph above as a “power that sets out to make concrete sense of the abstract and absurd universe into which we are falling.”[6] That is, toward a particular capacity, and phenomenotechnique, for envisioning and world-making.


Deborah Levitt is Assistant Professor of Culture and Media Studies at The New School. She is the author of The Animatic Apparatus: Animation, Vitality, and the Futures of the Image (Zero Books, 2018) and co-editor of Acting and Performance in Moving Image Culture: Bodies, Screens, and Renderings (transcript, 2012). Her current book project, Rendering Worlds, investigates how media’s new perceptual infrastructures can become vehicles for a pluralist political imaginary.


[1] For a fascinating examination of these issues, cf. Elizaveta Shneyderman, “Parameterization: On Animation and Future Corporealities,” Animation Studies: The Peer-Reviewed Open Access Online Journal for Animation History and Theory, June 14, 2021.

[2] This could be a long list. I recommend especially Mark Hansen, Feed-Forward: The Future of Twenty-First Century Media (University of Chicago Press, 2014); Shane Denson, Discorrelated Images (Duke University Press, 2020); and James J. Hodge, Sensations of History: Animation and New Media Art (University of Minnesota Press, 2019).

[3] Deborah Levitt, The Animatic Apparatus: Animation, Vitality, and the Futures of the Image (Zero Books, 2018).

[4] Cf. Rudin, C., & Radin, J. (2019). Why Are We Using Black Box Models in AI When We Don’t Need To? A Lesson From an Explainable AI Competition. Harvard Data Science Review1(2). https://doi.org/10.1162/99608f92.5a8a3a3d, for an interesting discussion of these issues among data scientists.

[5] This has of course been an important question in the history of science. Gaston Bachelard’s concept of “phenomenotechnique” provides one of the most interesting and generative means to address it. Cf. Bachelard, The New Scientific Spirit, trans. Arthur Goldhammer (Beacon Press, 1986) and The Formation of the Scientific Mind (Clinamen Press, 2002).

[6] Vilém Flusser, Into the Universe of Technical Images, trans. Nancy Ann Roth, intro. Mark Poster (University of Minnesota Press, 2011), 37.