Free improvisation and electronic music composition

Plan Author

  • Tim Bedford, 2017

Fields of Concentration

  • Music

Sample Courses

  • Course: Gadgets: An Electronics and Microcontroller Lab
  • Course: The Act of Improvisation
  • Tutorial: Musical Improvisation Software
  • Tutorial: Sonic Practice: Interdisciplinary Approach

Project Description

Between the object and me: Free improvisation solo and duo analysis paper.

Faculty Sponsors

  • Matan Rubinstein
  • Jim Mahoney

Outside Evaluator

  • Daniel Warner, Hampshire College

Overview

This Plan is a study of free improvisation through writing, performance, and computer programming, and a collection of electronic music compositions. The components include a paper exploring free improvisation, titled “Between the Object and Me,” and a concert of improvised music. It also includes the development of DAVIS (Daft Architecture Vocal Improvisation System), an interactive music system designed to perform improvised music alongside a performer.

Excerpts

In order to better explore these human-object relationships, it is important to distinguish between individual creative autonomy and a creative exchange between performers. For this reason, it is helpful to consider what exactly differentiates a solo from a duo in free improvisation. The distinction is much greater than the presence of one extra improvisor. When writing about their practice, many improvisors say that solo and duo improvisation require two completely different approaches. Unable to generate material from interactions with other improvisors, the soloist may fall into “a state of terror, with only the instrument to cover your nakedness.”

When I began developing my own IMS, I decided not to continue the search for conceptual spaces or to aspire to “strong” interactivity. I wanted to create something that I could use to extend my investigations into objecthood. I named my system the Daft Architecture Vocal Improvisation System (DAVIS). DAVIS is designed to be as conceptually comprehensible to me as possible. Musical notes are extracted from incoming audio; repeating series of note parameters (pitch and duration) are detected; and these patterns, which I label “motifs,” are combined to create synthesized notes with a human voice-like synthesizer. I designed a simple interface that shows me all the motifs DAVIS has stored, and which are currently playing, as I perform. My goal was to create a system whose processes I could fully understand, but still engage with as an improvisational partner. In response to Voyager and NN Music’s often overwhelming polyphony, I restricted DAVIS to monophonic output. I wanted to make DAVIS’s sound simple yet recognizable. The flat sound of the synthesized voice was intended to hint at humanity, but sound fake enough to keep me thinking in terms of the non-human.
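The motif-detection step described above can be sketched in a few lines. This is a minimal illustration, not DAVIS’s actual code: it assumes notes arrive as (pitch, duration) pairs and finds short series that repeat; all names and thresholds here are invented for the example.

```python
from collections import Counter

# Hypothetical note stream of (MIDI pitch, duration in seconds) pairs,
# standing in for the pitch/duration events extracted from incoming audio.
notes = [(60, 0.5), (62, 0.25), (64, 0.5),
         (60, 0.5), (62, 0.25), (64, 0.5),
         (67, 1.0)]

def find_motifs(notes, min_len=2, max_len=4, min_count=2):
    """Return series of note parameters that repeat at least min_count
    times -- a naive stand-in for a motif detector."""
    counts = Counter()
    # Count every contiguous subsequence within the length bounds.
    for length in range(min_len, max_len + 1):
        for i in range(len(notes) - length + 1):
            counts[tuple(notes[i:i + length])] += 1
    # Keep only the subsequences that actually repeat.
    return [list(m) for m, c in counts.items() if c >= min_count]

motifs = find_motifs(notes)
```

A real-time system would work incrementally and tolerate small pitch or timing deviations, but the core idea is the same: store repeating parameter series, then recombine them as output material.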

Reflections

I remember late nights, stress, and eureka moments that would throw the Plan off course, so I had to make notes to explore them later. Having code, writing, and performance together, all thematically linked, would probably be the most exciting part for a prospective student. I don’t think any of the three parts is particularly impressive on its own, but being able to work on all three at once was very rewarding and felt very Marlboro.

I was inspired by similar work in the fields of improvised music and computational creativity. The relationship between the components was pretty consistent, but the focus of the paper changed a lot, right up until the end (hence the stress). I will likely have a career in software, and getting to work on a software project over a period of several months was very formative for my approach to programming.