Invited to the Distinguished Lectures of the Canada Research Chair in Film and Media Studies, the scholar warned against the “political appropriation of the vectorization of culture.”
Olivier Du Ruisseau

On October 29, 2025, the cinEXmedia partnership welcomed Antonio Somaini—Professor of Film Theory, Media, and Visual Culture at Université Sorbonne Nouvelle and Senior Member of the Institut universitaire de France—to the Université de Montréal as part of the Distinguished Lectures of the Canada Research Chair in Film and Media Studies.
Building on reflections initiated during his curatorial work for the exhibition The World According to AI, presented from April to September at the Jeu de Paume in Paris, Professor Somaini highlighted the role of latent spaces in artists’ use of visual archives through artificial intelligence (AI), as well as their inherently political dimension.
To clarify, a latent space is a compressed and abstract representation of data constructed by AI algorithms. In other words, it is a digital space in which each piece of data (image, text, sound) is encoded as a vector of numbers. These vectors, generated through the learning process, organize information according to statistical similarities rather than explicit categories. For example, an AI trained on images may learn to identify objects, textures, or visual styles; these features are then encoded as positions within the latent space associated with each image.
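The idea of items positioned by statistical similarity can be sketched in a few lines of code. This is a toy illustration only, not any model discussed in the lecture: the three-dimensional vectors and item names below are invented for the example, whereas real models learn embeddings with hundreds or thousands of dimensions.

```python
import numpy as np

# Toy "latent space": each item (an image, a text, a sound) is a vector.
# These vectors are made up for illustration; real models learn them
# from data during training.
latent = {
    "photo_of_cat":   np.array([0.9, 0.1, 0.0]),
    "drawing_of_cat": np.array([0.8, 0.2, 0.1]),
    "photo_of_car":   np.array([0.1, 0.9, 0.3]),
}

def cosine_similarity(a, b):
    # Standard measure of closeness between two vectors.
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def nearest(query, space):
    # Items close together in latent space are "similar" according to
    # the model's statistics, whatever that similarity means to a human.
    return max((k for k in space if k != query),
               key=lambda k: cosine_similarity(space[query], space[k]))

print(nearest("photo_of_cat", latent))  # prints "drawing_of_cat"
```

Here the two cat items end up near each other without any explicit "cat" category, which is the point of the passage above: the organization emerges from statistical regularities rather than human-defined labels.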
“While preparing the exhibition, I asked myself: ‘What does it mean to experience the world through AI?’” explained Antonio Somaini. “What does it mean to recognize, imagine, interact with others, read, see, or listen in a world increasingly permeated by AI models, which act as filters transforming the way we perceive, think, and write?”
Risks of Ideological Bias
Indeed, artificial intelligence does not perceive data in the same way as human beings. It labels and encodes all kinds of information “according to algorithmic logics,” often without our knowing how, the researcher noted. As a result, a visual archive organized—or even generated—by AI produces a radically different relationship to the world than that of a traditional physical or digital archive.
For this reason, Professor Somaini issued a warning against the “political appropriation of the vectorization of culture.” Drawing on Michel Foucault—who, in The Archaeology of Knowledge (1969), defines the archive not as a simple collection of documents but as “the general system of the formation and transformation of statements”—he emphasized the potential biases of generative AI.
In the current context of growing geopolitical tensions and the strong industrial concentration of AI technologies, largely dominated by U.S.-based companies, “it is more urgent than ever to consider latent spaces as sites of power,” he stated. “The Trump administration has already begun removing certain words from official state documents, including the adverb ‘historically,’” he added. “This fits within a logic typical of authoritarian regimes: presenting a single, unified version of history.”
Institutional and Artistic Examples
As an example, the researcher mentioned the Founder’s Museum, a new museum associated with the White House and funded by American conservative lobbies, which opened last fall in anticipation of the 250th anniversary of U.S. independence, to be celebrated in summer 2026. The institution offers a patriotic perspective on the country’s history and minimizes the contribution of African Americans. Its main exhibition featured AI-generated videos animating iconic images of figures such as John Adams and Thomas Jefferson.
These videos combine phrases historically associated with these figures—drawn from real archival sources—with fictional dialogue. The video featuring John Adams appeared particularly troubling to Somaini. In it, the second President of the United States declares, among other things, “Facts do not care about your feelings,” a phrase often used by Ben Shapiro, an American conservative commentator associated with PragerU, a major funder of the institution.
The researcher nevertheless concluded his lecture on a more inspiring note by presenting excerpts from four short films that demonstrate different ways generative AI can create new forms of visual archives or reveal how information is classified within latent spaces. This latter point was notably explored in What Do You See, YOLO9000? (2014) by Taller Estampa. Over images from iconic scenes in film classics such as Two or Three Things I Know About Her (Jean-Luc Godard, 1967) and Jeanne Dielman, 23 quai du Commerce, 1080 Bruxelles (Chantal Akerman, 1975), words identifying the objects visible on screen appear as captions. The effect highlights the contrast between AI’s perception of reality, encoded in latent spaces, and the lived experience of film by human viewers.
However, such automatic processes of artificial intelligence remain, for now, “limited by digital infrastructure” and therefore still rely on human-made structures, Somaini concluded: “The latent space represents a matrix of possibilities, but it remains to be seen how we will be able to frame it.”
