In this project, we move beyond traditional data representation by using Stream Diffusion, a machine-learning pipeline for real-time generative AI art. The system transforms live imagery in response to audience prompts about future technological fantasies. By celebrating technology in a body-positive way, the work invites participants to interact deeply, reflecting on hyperreality and identity through virtual self-portraits.

We also address the potential misuse of this technology, particularly examining the role of the violent viewer. In adult content, the lines between authorship, identity, and exploitation can blur. By engaging with these issues, we aim to understand them and to design more humane interventions for artmaking and beyond.

Participants immerse themselves in an experience that highlights the inherent biases in technology, prompting contemplation of our place in a hyperreal world. The interface acts like a mirror room, inviting reflection on contemporary reality. Digital artist Martina Menegon suggests that “interactive, virtual, and extended self-portraits offer a chance for liberation, challenging us to rethink our identities and relationships with our virtual selves.”

Built in TouchDesigner, a node-based visual programming environment for real-time multimedia content, these interfaces embody “liveness plus,” as described by Harvard MetaLab. They capture motion data from the human body and transform it into generative visual imagery, prompting reflection on our existence within a world saturated with mediated images and screens.

Designed for open interaction, these interfaces encourage participants to engage with technology through somatic play, using their bodies and faces to “puppet” the technology. This deepens the connection to one’s physical presence and projects an amplified image, bridging tangible experiences and virtual representations, and offering insights into our digitized lives.
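The body-to-image “puppeting” described above can be thought of as a mapping from tracked pose data to generation parameters. The sketch below illustrates one such mapping in plain Python; the names (`BodyPose`, `to_generation_params`) and the specific parameter choices are hypothetical illustrations, not the project’s actual TouchDesigner or Stream Diffusion pipeline.

```python
# Hypothetical sketch: mapping body-tracking data to image-generation
# parameters. Names and mappings are illustrative, not from the project.
from dataclasses import dataclass


@dataclass
class BodyPose:
    # Normalized (0..1) features from a body tracker.
    head_x: float        # horizontal head position in the frame
    head_y: float        # vertical head position in the frame
    hands_spread: float  # distance between wrists, normalized


def to_generation_params(pose: BodyPose) -> dict:
    """Map pose features to parameters a generative stage might consume."""
    return {
        # Horizontal head position pans the virtual camera (-1 .. 1).
        "pan": (pose.head_x - 0.5) * 2.0,
        # Vertical head position tilts it (-1 .. 1).
        "tilt": (pose.head_y - 0.5) * 2.0,
        # Wider gestures let the prompt reshape the frame more strongly.
        "prompt_strength": min(max(pose.hands_spread, 0.0), 1.0),
    }


params = to_generation_params(BodyPose(head_x=0.75, head_y=0.5, hands_spread=0.9))
print(params)  # {'pan': 0.5, 'tilt': 0.0, 'prompt_strength': 0.9}
```

In a live installation, a mapping like this would run once per frame, so the performer’s movement continuously steers the generated image.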

Video by Katherine Helen Fisher

Credits:

  • Creative direction: Katherine Helen Fisher
  • Character Development, Performance: Jae Neal
  • Movement Direction and Performance Concept: Jae Neal with Katherine Helen Fisher
  • Realtime Choreographic Interface directed by Katherine Helen Fisher in collaboration with creative technologists Shimmy Boyle, Minyong Cheng with Safety Third Productions
  • Sound composition: Kite, Oga Li (assistant)
  • Physical station design: David Mesiha
  • Developed within the Data Fluencies Theatre Project