Under the Machine’s Gaze is an interactive, AI-generated video artwork that uses artificial intelligence as a creative meta-tool to explore and problematize how computational systems (specifically AI) operate as oppressive technologies that identify, classify, and control the human body in society. The work is a visual poem about the formation, deformation, fragmentation, and defragmentation of our data bodies and identities through the eye of the AI. It uses natural language (text prompts), bodily language (dance gesture), and text-to-image generative algorithms (Stable Diffusion models, natural language processing, and LLMs) to poetically and performatively visualize the frustration of the reduced, misrepresented, objectified body under the hegemonic gaze of the techno-social eye.

This work strives to provoke a dialogue about data colonialism and surveillance capitalism in the context of the societal applications (benefits and exploitations) of AI technologies, large language models, and analytical systems used to mine private human data. Using the language of art, poetry, and performance, it brings attention to the new modes and modalities of control and surveillance enabled by these powerful emerging technologies. The objective of this installation artwork is to illustrate the fluidity of the motions and emotions of a body under the “Machine’s Gaze,” fluctuating between the human/animal, machinic, corporeal, and data bodies, and the tension in the transitional states in between: a body in a constant state of becoming and unbecoming.

This interactive video artwork was developed by Sahar Sajadieh, based on a dance performance video by Jae Neal and Katherine Helen Fisher that utilized an interactive choreographic interface. Written comments from Data Fluencies Theatre Project team members, responding to questions about the project’s themes, were collected and used to create the visuals. A Stable Diffusion-based computational platform for affective storytelling with generative AI, developed by Sahar Sajadieh, transformed each video frame into a painting using text-to-image synthesis.


  • Video artist, interaction designer, and generative AI developer: Sahar Sajadieh
  • Performer: Jae Neal 
  • Creative direction of the microfilm and real-time interface: Katherine Helen Fisher 
  • Sound composition: Kite
  • Prompt contributions: Sahar Sajadieh, Ioana Jucan, David Mesiha, Enongo Lumumba-Kasongo, Gavan Cheema, Katherine Helen Fisher, Tushar Mathew, Jae Neal
  • Microfilm concept: Jae Neal with Katherine Helen Fisher
  • Interactive choreographic interface for the original dance piece: Mingyong Cheng, Shimmy Boyle with Safety Third
  • Microfilm shot at SV Studios in Los Angeles, California
  • Developed as part of the Data Fluencies Theatre Project.