Algoritmi dell’Anima™


Algoritmi dell’Anima™ (Algorithms of the Soul™) is emergent music for cyborgs. It’s a chamber work in two versions: 1) fourteen instruments (3 FL / 1 CL / 2 SX / 2 VN / 2 VLA / 2 VC / 2 CB), duration 11 minutes; and 2) standard string quartet (VN1/VN2/VLA/VC), duration 8 minutes 48 seconds. The work uses my mNotation™ live music notation generative software; the notation is delivered to the performers via Wi-Fi on their laptops, tablets, or smartphones.


This work is particularly relevant to MUME because of its marriage of technology and idea in the domain of music, performance, and networked experience.

I call this work ‘emergent music for cyborgs’ because it might be instructive to beings who are only beginning to develop algorithmic processes that might simulate, at some level, human consciousness, imagination, and an emotional vocabulary we might conflate into a makeshift/placeholder definition of soul (anima), until time, reflection, and suffering permit a broader realization of the term.

Who’s a cyborg today? I think many of us are, at least poetically. I define a cyborg as some fusion of human and (info) technology, usually within the context of a post-human perspective (including perhaps some combo of post-race, post-gender, post-geopolitical, and post-cultural point[s] of view). Perhaps—when networked computing (i.e. the Internet) becomes self-aware, and human consciousness is transferable to digital and cybernetic organisms—music made by algorithm will be appreciated by these beings. Or not.

When the mNotation™ software was sent to a well-regarded contemporary music conductor and composer, his email response was: “I took a look at your algorithm… it looks fun to play with—not sure if it’s capable of generating much of an orientation reaction or subsequent categorization response with resultant emotion cascade.”

I realized he was probably right, and then decided mNotation™ would be a great learning-piece for cyborgs, to give them an experience of a nostalgic, modernist cloud of sound events grounded in structure and code, and evoking perhaps only fleeting and ambiguous emotional responses from humans. So, music produced by mNotation™ would invite the AI of cyborgs to consider, ponder, and reflect on the seemingly random yet meaningful ambivalence of the resultant sonic activity, and reflection is a key component in the maturation and development of one’s soul (as perhaps an archetypal psychologist like James Hillman would posit).


mNotation™ Software: includes individual parts for all performers, and video of a previous performance with the software (2010).

BIOGRAPHY (150 words; other versions at )

The music of composer/media artist Joey Bargsten has been played by the Indianapolis Symphony and the St. Paul Chamber Orchestra, and has been broadcast on NPR’s International Concert Hall.

His website BAD MIND TIME™ ( ) received awards from New Media and PRINT magazines, and won the Audience Award at the Stuttgart Filmwinter Festival for Expanded Media. It has been included in digital media exhibitions internationally. His films have been shown at the Brainwash (Oakland, CA) and Zero Film Festivals (Los Angeles, New York, Toronto, London).

In 2011, he wrote Experimental Media Voodoo™: Hybrid Forms and Syncretic Horizons, available free online at

Bargsten won one of the 2013 Miami Knight Arts Challenge Awards for the 2015 presentation of his new transmedia opera MELANCHOLALALAND™ ( ).

Bargsten taught at the University of Iowa, Georgia Tech (one-semester sabbatical replacement), and the University of Oregon before his current appointment at Florida Atlantic University.

Quartet Version:



PERFORMANCE REQUIREMENTS, STAGE LAYOUT, SCORE & OTHER CONSIDERATIONS:
Performers (4): 2 VN / VLA / VC
• Each performer has a laptop, tablet, or smartphone capable of running the Puffin, Chrome, Firefox, or Safari browser, with Adobe Flash Player installed
• Wi-Fi network
• Performers may wear headphones/earbuds if they want to hear the metronome guide on the notation (optional)
• Each performer needs a sturdy music stand to hold the laptop/tablet/smartphone
• The ensemble may be amplified if the acoustics of the space require it
• Timekeeping of the score can be achieved through a stopwatch app running concurrently on a phone, laptop, or tablet

Go to mNotation™ link below to open and run software in your browser. An overview of how to use the software is here:


Large Ensemble Version:






This work can be performed not only as a concert version in a traditional recital or concert hall; it is also scalable and ‘de-centralizable’ in an art gallery space. If members of the audience are not ‘locked down’ into their seats, they can wander among the musicians and watch the notation generate in real time as the musicians play it.

A more de-centralized performance setup also optimizes the acoustics of the space, as members of the ensemble can be placed in acoustically live or resonant points in the gallery, providing a sort of natural amplification for delicate moments in the score. More prominent instruments (like the saxophones) could even be placed in balconies, entryways, hallways, or exterior spaces to both invite attention to the performance, and lend another level of acoustic immersion to the experience of the gallery audience. These acoustic and performative ‘balancing acts’ would need to be arranged during rehearsal, but I mention them as a potential way to customize the work as a sort of site-specific live musical ‘installation.’



The form of the piece is mostly symmetrical with regard to the relative density of sonic events. The duration is subdivided into sections based on the Fibonacci series and presented as a palindrome (1–2–3–5–8–5–3–2–1), which is echoed in the physical placement of the instruments (see score in supplemental PDF).
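Under these proportions, the section lengths can be derived arithmetically. The sketch below (in Python, purely illustrative and not part of the mNotation™ software) computes them for the quartet version:

```python
# A worked example (not the mNotation(TM) source): deriving section
# durations for the string-quartet version (8'48" = 528 s) from the
# palindromic Fibonacci weights described above.
weights = [1, 2, 3, 5, 8, 5, 3, 2, 1]   # palindrome of Fibonacci numbers
total_seconds = 8 * 60 + 48             # 528 s, quartet version

unit = total_seconds / sum(weights)     # 528 / 30 = 17.6 s per weight unit
sections = [w * unit for w in weights]

for i, (w, dur) in enumerate(zip(weights, sections), 1):
    print(f"Section {i}: weight {w} -> {dur:5.1f} s")
```

The central section (weight 8) thus runs 140.8 seconds, while the outer sections run only 17.6 seconds each, giving the arch form its symmetry.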

Each section is assigned a particular density level and musical sub-vocabulary. All musical activity is determined by the mNotation™ software system, which can generate music notation for any duration of time and any combination of instruments.
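One way to picture a section pairing a density level with generated activity is a toy scheduler that scatters note onsets at a per-section rate. Everything here — the density values, the events-per-second model, and the function names — is a hypothetical illustration, not the mNotation™ algorithm:

```python
# Hypothetical sketch of density-driven event scheduling. The actual
# mNotation(TM) algorithm is not public; the densities and the
# events-per-second model below are illustrative assumptions only.
import random

def schedule_events(section_start, section_dur, density, rng):
    """Scatter roughly `density` events per second across one section."""
    n = round(section_dur * density)
    onsets = [section_start + rng.uniform(0, section_dur) for _ in range(n)]
    return sorted(onsets)

rng = random.Random(2015)  # fixed seed so the sketch is repeatable
# assumed densities, mirroring the palindromic form: sparse edges, dense center
densities = [0.2, 0.4, 0.6, 1.0, 1.6, 1.0, 0.6, 0.4, 0.2]
durations = [w * 17.6 for w in [1, 2, 3, 5, 8, 5, 3, 2, 1]]  # quartet, 528 s

start = 0.0
for dur, dens in zip(durations, densities):
    events = schedule_events(start, dur, dens, rng)
    print(f"{start:6.1f}s - {start + dur:6.1f}s: {len(events)} events")
    start += dur
```

In a real system each onset would then be mapped to a pitch and rhythm from the section’s musical sub-vocabulary before being rendered as notation.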

Each member of the ensemble needs a laptop, smart tablet, or smartphone in order to read the generated notation. Tablets and phones need the Puffin browser, while laptops can use any modern browser.