André Gaudreault and His Team Present Their Research on AI in Paris


The co-director of cinEXmedia took part, alongside researchers from the Laboratoire CinéMédias, in the international conference "Couper/générer. Le montage à l’épreuve de l’IA" ("Cut/Generate: Editing in the Age of AI"), held from April 24th to 26th in Paris.

Photo: Roland Godefroy (Wikimedia Commons) | The Université Sorbonne Nouvelle, in Paris, France

The conference “Couper/générer. Le montage à l’épreuve de l’IA” (“Cut/Generate: Editing in the Age of AI”), held from April 24th to 26th, 2025, at the Maison de la Recherche of Sorbonne Nouvelle University in Paris, was organized by the Laboratoire International de Recherches en Arts and the Institut de recherche sur le cinéma et l'audiovisuel, in collaboration with the Institut Universitaire de France. The event brought together researchers, practitioners, and artists to examine the implications of artificial intelligence (AI) for the concept and practice of editing.

André Gaudreault, co-director of cinEXmedia and the Laboratoire CinéMédias, participated alongside research professional Marie-Odile Demay, postdoctoral fellow Anna Kolesnikov, and PhD candidate in film studies Tanzia Mobarak.

“Global Montage”

The presentation by André Gaudreault and Marie-Odile Demay, titled “L’IA à l’épreuve du montage : journal d’une expérimentation” (“AI Put to the Test of Editing: Journal of an Experiment”), traced the research team’s journey since they began exploring AI in April 2024. “Initially, we wanted to examine its impact on society as a whole,” explained André Gaudreault. “Then, we dove into the notions of editing and rhythm.”

A turning point came last September, when multidisciplinary artist Alain Omer Duranceau, a collaborator on the Laboratoire CinéMédias’ DÉMARRER project (2024 – BRDV, Université de Montréal), shared a short film he had created in just two days using artificial intelligence. Titled Neither Man Nor Movie Camera, the film was inspired by Dziga Vertov’s Man with a Movie Camera (1929). “We wanted to understand, using this film as an example, how the production chain of an audiovisual work could be altered by AI,” Professor Gaudreault said.

To this end, he and Marie-Odile Demay developed a “map” to “quantify the human contribution to AI-generated creative work,” he explained. The researchers found, for instance, that in Duranceau’s short film, the image editing had been done by the artist himself, since “the diffusion models were not capable of editing,” noted Marie-Odile Demay.
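The article does not detail how the researchers' "map" works, but the general idea of quantifying human contribution per stage of the production chain can be sketched as follows. This is a hypothetical illustration only: the stages, scores, and scoring scheme below are assumptions, not the researchers' actual method.

```python
# Hypothetical sketch of a per-stage contribution tally.
# Scores are illustrative: 1.0 = entirely human, 0.0 = entirely AI-generated.
stages = {
    "concept / prompt writing": 1.0,  # prompts authored by the artist (assumed)
    "image generation":         0.0,  # produced by the diffusion model (assumed)
    "image editing":            1.0,  # done by the artist, per the article
    "sound":                    0.5,  # mixed authorship (assumed)
}

def human_share(stage_scores: dict[str, float]) -> float:
    """Average human contribution across all recorded stages."""
    return sum(stage_scores.values()) / len(stage_scores)

ratio = human_share(stages)
print(f"Human contribution across {len(stages)} stages: {ratio:.3f}")
```

A real instrument would likely weight stages differently and distinguish kinds of human input (authorial choices vs. technical execution); the point here is only the stage-by-stage bookkeeping.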

Photo: Pierre Moisan | From left to right: Marie-Odile Demay, André Gaudreault, Anna Kolesnikov and Tanzia Mobarak, at the conference “Couper/générer. Le montage à l'épreuve de l'IA.”

A diffusion model, used primarily for image generation (but also for sound and text in some cases), is a type of generative model based on a probabilistic process. During training, it observes how a piece of data, such as an image, can be progressively “destroyed” by adding random noise, and learns to reverse that process step by step. The added noise removes information, rendering the data increasingly blurry, distorted, or unrecognizable; generating a new image then amounts to starting from pure noise and gradually denoising it. Tools such as DALL·E 2, Stable Diffusion, and Midjourney are all based on this type of model.
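The noising-and-denoising idea can be sketched in a few lines of Python. This toy version applies the closed-form forward step of a diffusion process to a three-value "image" and then inverts it exactly, because the noise is known; a trained model, by contrast, must learn to predict that noise with a neural network. The schedule and function names are illustrative assumptions.

```python
import math
import random

def forward_diffuse(x0, t, alpha_bars):
    """Forward process: blend the clean signal with Gaussian noise.
    Uses the closed-form q(x_t | x_0); returns the noised sample and the noise."""
    ab = alpha_bars[t]
    noise = [random.gauss(0.0, 1.0) for _ in x0]
    xt = [math.sqrt(ab) * v + math.sqrt(1.0 - ab) * n for v, n in zip(x0, noise)]
    return xt, noise

def reconstruct_x0(xt, noise, t, alpha_bars):
    """Invert the forward step, given the exact noise.
    A trained diffusion model would *predict* this noise instead of knowing it."""
    ab = alpha_bars[t]
    return [(v - math.sqrt(1.0 - ab) * n) / math.sqrt(ab) for v, n in zip(xt, noise)]

# Toy linear noise schedule: alpha_bar shrinks over time, so later steps are noisier.
T = 10
betas = [0.02 * (i + 1) for i in range(T)]
alpha_bars = []
prod = 1.0
for b in betas:
    prod *= 1.0 - b
    alpha_bars.append(prod)

x0 = [0.5, -1.0, 2.0]  # a toy "image" of three pixel values
xt, noise = forward_diffuse(x0, T - 1, alpha_bars)
x0_rec = reconstruct_x0(xt, noise, T - 1, alpha_bars)
```

The interesting part for generation is that a model trained to estimate `noise` from `xt` alone can run the reverse process starting from pure noise, producing data that never existed in the training set.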

Building on these findings, the researchers began a series of experiments with the help of Alain Omer Duranceau and Yann Guizaoui, a PhD candidate in media studies and specialist in human-machine interactions. “We decided to force the model to make cuts,” said Marie-Odile Demay. “We succeeded, but the results were inconclusive.”

The team began with proprietary systems, such as OpenAI’s Sora software, then turned to the open-source Hunyuan Video diffusion model, developed by the Chinese company Tencent. “The open-source format allowed us to observe the AI’s process and realize that the diffusion model, by its very nature, performs editing within its image generation process,” explained Marie-Odile Demay.

To characterize this editing, carried out at the very source of image generation by diffusion models, the two researchers put forward the notion of “global” editing, in reference to the notion of the “global image” originally developed by filmmaker and theorist Sergei Eisenstein. André Gaudreault even compares the emergence of AI-based filmmaking to the earliest days of cinema: “To me, the release of Sora—which lets you create videos from prompts—is a global first, on par with the Lumière brothers’ first public film screening at the Grand Café in 1895.”

Dziga Vertov: Between Digital Revolution and Postmodern Pastiche

Anna Kolesnikov and Tanzia Mobarak also drew inspiration from Duranceau’s short film but approached it from a different angle in their presentation, “Neither Man Nor Movie Camera: Vertovian Kino-Pravda Meets Alain Omer Duranceau’s Pastiche”.

The two researchers focused on Dziga Vertov, the director of the original 1929 film: “We wanted to focus on Vertov not only as the subject of this pastiche, but also as someone who, through his writings and cinematic practice, inspired contemporary understandings of the digital revolution,” said Anna Kolesnikov. “We wanted to highlight the potential of studying Vertov’s texts and works in current discussions on AI aesthetics.”

Their analysis of both films served as a springboard for a broader reflection on creative practice and the concept of postmodern pastiche in the era of artificial intelligence. “We compared Vertov’s original work and Duranceau’s interpretation with other AI-generated works inspired by Vertov, while examining the emergence of certain biases inherent in AI systems,” said Tanzia Mobarak.

The researchers aimed to observe how such biases, or prejudices, can emerge during prompt writing and how they are visually rendered by AI. “For instance, unlike the original film, Neither Man Nor Movie Camera emphasizes the camera rather than the editor figure. Also, women, who held prominent roles in Vertov’s film, disappear entirely in Duranceau’s version,” noted Anna Kolesnikov.

Together, these two presentations allowed André Gaudreault and his team to share the results of a year of collective research—each highlighting a distinct and original approach that bridges the early development of cinematic montage with experimental work using the latest AI-based diffusion models.