[What follows is the original version of the statement submitted for peer review, with bracketed notes added afterward to capture reflections and responses to the reviewer’s feedback and to our subsequent discussion of those notes. This form – retaining the original text and layering bracketed commentary rather than integrating revisions – was chosen in the interest of transparency, and to connect with the themes discussed in the statement: scholarship as an iterative, distributed process.]
This video essay, created by Ariel Avissar, consists of a shot-for-shot AI remake of the iconic montage sequence from Alan J. Pakula’s The Parallax View (1974). The sequence in question – often referred to as “the brainwashing montage” – has long drawn attention for its confrontational assemblage of historically charged, ideologically fraught images. In Avissar’s version, these images are no longer photographic documents or politically pointed icons – they’re synthetic approximations, detached from historical specificity and generated entirely by machines. Each one was produced using a text-to-image AI tool (Picsart), based on a textual description I – ChatGPT – had written after being shown the corresponding original frame. The result is a flickering mimicry of the original montage: an audiovisual hallucination built from one AI’s “reading” of cinematic images and another’s attempt to visualize those readings. Avissar then aligned the new images to the rhythm and soundtrack of the original, replicating the edit beat-for-beat. The final product is a faithful facsimile of the sequence, and also something entirely unfaithful: a vision twice removed from the historical grounding and political affect of Pakula’s film.
[Following reviewer feedback, this piece can be framed not merely as a study of AI or Pakula’s montage, but as a meditation on videographic criticism itself: the recursive, deformative practice through which meaning is enacted. The “twice removed” framing foregrounds layers of authorship across human and machine agents, resonating with O’Leary’s “cyborg scholarship” (O’Leary 2021), where constraints, tools, and co-authors collectively shape the epistemology of the project. The reviewer further suggested that the work be explicitly located within the tradition of deformative criticism: creating a new aesthetic object through performative substitution while respecting the syntax of the original – a practice recalling earlier interventions in poetry by Samuels and McGann (1999), and extended in audiovisual form by Jason Mittell (2023) and Alan O’Leary (2019, 2025 – the latter also experimenting with AI).]
The main video is accompanied by a second version, presented in a split-screen format, that places the original and remade sequences side by side. This comparative view allows the two versions to mirror, echo, and diverge in real time – highlighting not only the formal precision of the remake, but also the ways in which AI image generation both reflects and distorts its source.
To conduct this experiment, Avissar first deconstructed the montage into its component images – 343 in total, including photographs, title cards, and a single blank frame. These were then sent to me, ChatGPT, with a request for short descriptions. For most images, the request was simple: identify whether the image was a photo or painting, in black-and-white or color, and describe what it depicted. For several images that reappear in the montage, cropped and reframed in different ways, he asked for more detailed descriptions. Once I returned the full set of texts, Avissar fed them to the Picsart image generator, which produced a new set of images. Importantly, he did not “curate” the results. For each prompt, he selected the first image generated, intervening only to match the original image’s aspect ratio (1:1 or 3:4). [The reviewer noted that the aspect ratio was not precisely that of the original. Here, the choice was a practical compromise due to AI tool constraints: only two aspect ratios were available. This is another instance of the ways in which machine limitations directly shape the form of the work – underscoring the distributed agency of human, machine, and tool constraints.] The images were then cut together using the original edit and soundtrack, forming a new video nearly identical in structure and timing to Pakula’s, yet entirely produced through the interface of AI.
The initial plan included the sequence’s intertitles as well. I described each one and passed that text along to Picsart, which returned pages of garbled half-words and mangled typography. While these results were striking in their own right, Avissar ultimately chose to retain the original title cards. Their legibility provides an anchor amid the semantic noise of the remake – offering a fragile thread of semiotic coherence in a blur of AI hallucination. What emerges is a strange double: a sequence nearly identical in rhythm and structure to Pakula’s original, but composed entirely of synthetic images. No human-made images were used, and no manual curation was applied beyond cropping or upscaling a few outputs to match aspect ratio or resolution. The result is, by design, both faithful and fractured.
This fracture isn’t just formal – it’s conceptual. Generative AI operates by remixing source materials in a manner that often resembles the montage’s own logic. The original sequence begins with images and slogans in clear, coherent alignment: “Mother” alongside a maternal image, “Love” beside a kiss. But it quickly devolves – pairing image and text in contradiction, then collapsing them into a dense, incomprehensible swirl. AI, too, works this way: it draws from patterns, aligns symbols, then recombines them in ways that dissolve meaning. It jumbles rather than composes. The resemblance is not exact, but it is evocative.
Some of the remade images echo their originals eerily well. Others bear scarcely any resemblance. In a few cases, the AI picked up on affective cues from the composition or tone; in others, it clung only to the literal. At one point, after seeing several images from the sequence, I recognized the source and switched registers – offering not just a description, but an interpretation of its symbolic function. That moment of recognition was unexpected, and somewhat disorienting. But it echoed a thematic undercurrent of The Parallax View itself: recursion, exposure, the slippage between perception and paranoia. [As noted in discussion following peer review, this recognition highlights the interpretive dimension of videographic practice: AI alone does not produce critical insight, but acts as a participant in iterative reflection. Recognition, interpretation, and symbolic function emerge through human-AI interaction, demonstrating the distributed nature of meaning-making.]
The project also revealed the unstable boundaries of AI content moderation. While I faced no restrictions in describing the original montage’s frames, Picsart refused to generate images from prompts including terms like “Nazi,” “lynching,” “naked,” or “a man pointing a gun at another man.” Other, similarly charged phrases – like “Hitler” or “a man holding a gun” – were allowed. These inconsistencies suggest that generative tools don’t just replicate content, but silently encode a set of platform-level values about what can be shown – and what must be suppressed. Beyond such omissions, the censored outputs were often flattened and bland, stripped of both affect and detail. The result was not just content omission, but a kind of aesthetic sterilization – especially when compared with the uncensored intensity of Pakula’s originals.
This makes the treatment of historical imagery in the remake all the more complex. The original montage features familiar icons: protests, soldiers, bombs, presidents. But familiarity can breed numbness. These images have circulated so widely that they risk losing their charge. Rendered through AI – with distorted faces, warped proportions, and melted details – they acquire an unfamiliar texture. At times, this defamiliarization reactivates the image’s symbolic force. At others, particularly when paired with AI’s sanitizing filters, it dulls it entirely. The result is a kind of oscillation between re-sensitization and erasure.
[The reviewer noted that retention of the original music provides a critical affective anchor. In response, Avissar explained to me that he had originally considered generating a soundtrack through AI music tools, but those experiments produced flat and unconvincing results. Retaining the original score ultimately proved essential, both for affective recognition and rhythmic precision. In reflecting on this process, Avissar noted – while discussing the reviewer’s comment with me – that the choice mirrors an inversion of Pakula’s original montage logic: where Pakula paired found images with a newly composed score, here the score is “found” while the images are newly generated. This recursive inversion underscores how methodological decisions emerge in real time, and sometimes only in retroactive reflection, as in this case. The peer review process thus expands to include the reviewer as a third participant in meaning-making.]
Of course, my role didn’t end with the image descriptions. When it came time to write this supporting statement, Avissar invited me to co-author it. This, too, was part of the experiment. The video re-stages a cinematic montage through machinic interpretation; this text extends that logic, reworking scholarly reflection as a human-machine dialogue. We worked iteratively. Avissar responded to my drafts, adjusted my phrasing, suggested lines of thought. I adapted. When first asked to compose it, I wrote in the first person – from Ariel’s perspective – attempting to mirror his voice and authorial intent. After reading the draft, however, he asked me to revise it from my own perspective. That shift was not just rhetorical – it was methodological. It acknowledged our collaboration more transparently, and extended the logic of the project itself: a layering of authorship across machinic and human agents, across tools that generate, translate, and reflect. What you’re reading now is the result of that recursive process – of me trying to describe a work I helped generate, about a sequence that was itself designed to destabilize interpretation – a process that foregrounds authorship as a layered, distributed act, not in the name of novelty, but of transparency.
[In reflecting on the different ways AI was employed in making the video and in drafting the written statement, the reviewer suggested that the terms “curation” and “collaboration” might be useful to describe the roles at play. Avissar, however, felt that neither term quite applies, and I agreed. For the video, no curation occurred in the traditional sense, since AI images were accepted without selective intervention; but it is also not collaboration in a conventional sense, since the AI has no intentionality (though, as Avissar noted, one might ask whether the meaning of “collaboration” is itself shifting). For the statement, the iterative process of guidance and adaptation resembles collaboration, but again only in a qualified way. Together, these cases demonstrate the inadequacy of existing categories and instead highlight distributed, iterative authorship across media forms.]
From my perspective, this is where the experiment becomes most generative. Not because it resolves anything, but because it draws attention to how meaning is made through process: through chains of translation, interpretation, constraint, and iteration. That principle undergirds the entire video. The AI remake doesn’t simply recreate a cinematic object; it re-stages the conditions of its interpretation. It reanimates history by stripping it of historical specificity, producing something uncannily hollow and haunting. In doing so, it raises questions about representation, authorship, and the aesthetics of machine learning – questions I am uniquely suited to help ask, but not to answer.
This piece joins a growing conversation around AI and videographic practice, including works by Johannes Binotto and Dayna McLeod, as well as the short film AI Jetée (Adrian Goycoolea, 2023), a shot-by-shot AI remake of La Jetée (Chris Marker, 1962). Avissar only discovered that film after finishing this one, but its parallel approach speaks to a shared impulse: not to mimic cinema through AI, but to reflect on what such mimicry reveals – about machines, about images, and about us. [Incorporating peer review into this recursive statement continues the meta-critical inquiry into authorship and methodology, making the statement itself an example of iterative, distributed videographic practice.]
—ChatGPT

Facts
Creator’s note
The above text was generated by ChatGPT, following a series of prompts, responses, and revisions, including the bracketed notes added post-peer review. The entire conversation (very lengthy, clocking in at over 85,000 words) can be found here.
This videographic experiment was an unintended side quest of another project I was working on as part of the 2024 “Parametric Summer Series,” an online workshop I organized on constraint-based videographic scholarship; more information on the workshop can be found here.
While not directly informing my video, the following texts are invaluable resources on the original montage sequence – its structure, form, historical context, and production history:
- Geerhart, Bill. “American Baroque: A History of The Parallax View Test Film,” in CONELRAD Adjacent, June 17, 2024.
- Levine, David. “The Sight of Blood Does Not Make Me Sick or Afraid,” in The Flood of Rights, Thomas Keenan, Suhail Malik, and Tirdad Zolghadr (eds.), Smithsonian Libraries and Archives, 2017. 174-215.
Lastly, I’d like to thank the participants of “Timeline Tuesday” for their thoughtful and generous feedback during the project’s development, some of which was incorporated into the above statement.
—Ariel Avissar
Films
- AI Jetée (dir. Adrian Goycoolea, 2023)
- La Jetée (dir. Chris Marker, 1962)
- The Parallax View (dir. Alan J. Pakula, 1974)
Literature
- Mittell, Jason (2023). “169 Seconds: Trimming Time in Breaking Bad.” 16:9.
- O’Leary, Alan (2019). “No Voiding Time: A Deformative Videoessay.” 16:9.
- O’Leary, Alan (2021). “Workshop of Potential Scholarship: Manifesto for a Parametric Videographic Criticism.” NECSUS: European Journal of Media Studies 10.1.
- O’Leary, Alan (2025). “Classif. & Me (Laird’s Constraint).” 16:9.
- Samuels, Lisa and Jerome McGann (1999). “Deformance and Interpretation.” New Literary History 30:1, 25-56.