In this project, we are working to move beyond traditional data representation by integrating StreamDiffusion, a real-time image-generation pipeline at the forefront of generative AI art. This involves a diffusion pipeline that augments live camera images with machine learning, guided by audience prompts about their experiences of marginalization by technology as well as their observations and fantasies about how our technologically augmented lives might look in the future.
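The control flow of such a pipeline can be sketched as a simple loop: grab a frame, run a prompt-guided img2img step, display the result. The sketch below stubs out the diffusion step with a hypothetical `diffuse` function (all names here are illustrative assumptions, not the project's actual API); in practice StreamDiffusion would perform that step on the GPU.

```python
# Sketch of a camera-to-diffusion feedback loop. The diffusion call is a
# hypothetical stub standing in for a StreamDiffusion img2img step.
from dataclasses import dataclass

@dataclass
class Frame:
    pixels: list  # placeholder for image data

def capture_frame(t: int) -> Frame:
    # Stand-in for a camera grab; returns a dummy 4-pixel frame.
    return Frame(pixels=[t, t, t, t])

def diffuse(frame: Frame, prompt: str, strength: float) -> Frame:
    # Hypothetical stand-in for prompt-guided img2img:
    # `strength` controls how far the output drifts from the input frame.
    shift = int(strength * len(prompt))
    return Frame(pixels=[p + shift for p in frame.pixels])

def run_loop(prompt: str, n_frames: int, strength: float = 0.5) -> list:
    # Real-time loop: each captured frame is augmented by the prompt.
    out = []
    for t in range(n_frames):
        out.append(diffuse(capture_frame(t), prompt, strength))
    return out

frames = run_loop("our technologically augmented future", n_frames=3)
```

The key design point this loop illustrates is that the audience's prompt conditions every frame, so the imagery stays anchored to the participants' own words while the camera feed keeps it anchored to their bodies.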

However, this technological exploration is not without its failures. We are also probing the potential misuses of this technology, which have become ubiquitous even in its nascent stages. In the realm of pornography, for instance, distinctions between authorship, identity, and exploitation often blur. By engaging with these issues, we aim to better understand the potentially nefarious applications of emerging technology, an understanding that will enable us to design more humane interventions for its use in artmaking and beyond.

Participants will be immersed in an experience that not only clarifies the technology behind the interfaces but also prompts a deeper contemplation of our place within a hyperreal world. The interface, acting as a kind of mirror room, invites reflection on our contemporary reality. In line with digital artist Martina Menegon’s thoughts, “interactive, virtual, and extended self-portraits offer a chance for liberation, challenging us to rethink our identities and our relationships with our virtual selves amidst the fluidity of contemporary existence”.

This project builds on a body of work that Katherine Helen Fisher has been developing over recent years: collaboratively built networks for real-time choreographic interfaces intended for use in both live and virtual performances. These interfaces are built within TouchDesigner, a visual programming environment for real-time interactive multimedia content. They embody what the Harvard metaLAB describes as "liveness plus," suggesting their role as a tool for reimagining performance spaces for our hyperconnected world. Using sensors to capture motion data from the human body, the interfaces transform this data into generative visual imagery, often through extended photographic forms. In this way they prompt reflection on existence within a world saturated with mediated images and omnipresent screens.
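The sensor-to-imagery mapping described above can be sketched minimally: motion data is reduced to an energy value, then normalized into parameters that drive the generative visuals. This is an illustrative assumption about the mapping, not the project's actual TouchDesigner network; every name below is hypothetical.

```python
# Illustrative sketch of mapping body-motion data to generative-image controls.

def motion_energy(positions_t0, positions_t1):
    # Sum of per-joint displacement between two consecutive sensor readings.
    return sum(abs(b - a) for a, b in zip(positions_t0, positions_t1))

def to_visual_params(energy, max_energy=10.0):
    # Normalize motion energy into 0..1 controls for the image generator,
    # e.g. diffusion strength and feedback amount in a TouchDesigner-style patch.
    level = min(energy / max_energy, 1.0)
    return {"diffusion_strength": level, "feedback": 1.0 - level}

# Three joints move between two frames; more movement -> stronger diffusion.
params = to_visual_params(motion_energy([0.0, 1.0, 2.0], [0.5, 1.5, 1.0]))
```

Mappings like this are what let a performer "puppet" the imagery: stillness and motion become legible, continuous controls rather than discrete triggers.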

Designed to foster open interaction, these interfaces encourage participants to engage with the technology through somatic play, in essence using their own bodies and faces to "puppet" the technology, thereby claiming some autonomy over the media. This not only deepens the connection to one's physical presence but also projects one's image in an amplified, larger-than-life manner. The aim is to bridge the gap between tangible experiences and virtual representations, offering insights into our digitized lives.

Credits:

  • Character Development, Performance:
    Jae Neal
  • Movement Direction and Performance Concept: 
    Jae Neal with Katherine Helen Fisher
  • Realtime Choreographic Interface Creative Team:
    Katherine Helen Fisher, Shimmy Boyle, Mingyong Cheng with Safety Third Productions

Image credits: Data Fluencies Theater Project with key collaborators Katherine Helen Fisher, Jae Neal, Shimmy Boyle, and Mingyong Cheng with Safety Third Productions