“I see the player piano as the grandfather of the computer, the ancestor of the entire nightmare we live in, the birth of the binary world where there is no option other than yes or no and where there is no refuge.”
So wrote the famed American novelist William Gaddis, whose interest in the antiquated machines began in the late 1940s; they became the embodiment of his concerns over the increased mechanical reproduction of the arts that took place following World War II. The obsession would eventually yield his final, posthumously published work, Agapē Agape, originally conceived as a social history of the self-playing piano before an allegedly overwhelmed Gaddis converted it to fiction.
A struggle mirroring Gaddis’ can be seen in Pete Townshend’s famously unfinished early-’70s rock opera Lifehouse, which stumbled in part because of the guitarist’s obsession with the new VCS3 and ARP synthesizers, into which he envisioned feeding biographical data from audience members, culminating in a “universal chord.”
But whether we’re discussing Gaddis’ reactionary fears (themselves an ancestor of “Drum machines have no soul.”) or Townshend’s progressive ambitions (a musical “singularity” horizon), confronting technology has often been the downfall of otherwise great visionaries. Futurism may be a rich and varied concept that has given rise to many amazing works of music, from Sun Ra to Cybotron to Daft Punk. But the future itself? That’s a much trickier proposition.
That is not to say that music driven by futurism can’t itself manifest the future. It’s hard to argue with Kraftwerk’s robot aesthetic as a major leap for musickind, generating novel sounds along with the new style that would embody “future” music from the ’80s onward. The same could be said for Richie Hawtin’s 2001 DE9: Close To The Edit mix, which foretold the laptop-enabled, deconstructionist mode of music playback that freed DJs from the shackles of arrangement within songs, and which is now manifest in a million control triggers across all styles of electronic music.
Of course, these innovations did not exist in a vacuum, but they are clear mile-markers in the rear view mirror of musical development—ones that took control of the means of production (instrument construction and software programming, respectively), as well as the artistic product itself. Inventing new tools to make futuristic art that becomes the future of said art.
The natural assumption seems to be that we are due for another move forward, and soon. For all of its growth in cultural cachet, even after surviving a post-Millennial push towards the retro, electronic music, as a standard-bearer of the hi-tech, seems stuck in a rut.
This is in no way to say that individuals aren’t creating exciting, fulfilling and inventive music. That is a debate no side can possibly win, and arguing the macro view with micro examples is inherently futile. But as a recent essay by Adam Harper entitled “Sci-Fi, Hi-Tech, Future?” revealed, even the cutting edge sounds a bit dull, especially compared to what Afrika Bambaataa’s 808 must have sounded like the first time it was played in the Bronx.
So which way forward? As noted at the start of this article, searching for such enlightenment has stymied far more philosophically sophisticated minds. However, that doesn’t mean we can’t have fun pondering some possibilities.
One possibility might be gleaned from Squarepusher’s recent foray into robotic playback on last month’s Music For Robots EP. Working with the Japanese robot band Z-Machines, the always-adventurous producer composed music specifically designed to be physically played back by the machines. The possibilities were, of course, much broader than those available when composing for human musicians and their limited mechanics (what drummer has 22 arms?). But the approach had its own limits, as the artist himself pointed out:
“Each of the robotic devices involved in the performance of this music has its own specification which permits certain possibilities and excludes others – the robot guitar player for example can play much faster than a human ever could, but there is no amplitude control. In the same way that you do when you write music for a human performer, these attributes have to be borne in mind – and a particular range of musical possibilities corresponds to those attributes. Consequently, in this project familiar instruments are used in ways which till now have been impossible.”
Ultimately, music might step forward by bringing back the physical mechanics of sound generation, a concept almost completely set aside since music-making machines began going straight to tape, and one now fully internalized on hard drives. We’re talking about more than live drummers on dance records or gimmicky robot players. Recent innovations in prosthetics might point to a time when humans can translate motor signals from the brain into physical gestures of incredible dexterity, far beyond what our mere muscles and bones can achieve.
In the other direction, developments in AI are rapidly approaching the point where machines will start making music themselves, without specified input from a human operator. In an interview for The Quietus last week, experimental electronic musician Holly Herndon postulated a future in which machines make music not for humans at all.
“There’s a physical reason for the consonant sound of most human music, relating to the overtone series and the functioning of the ear. So if it’s no longer a human-centered universe, then I could see it getting quite dissonant for human ears and maybe extremely rhythmically complex. Maybe we’re still stuck to the bpm of our pregnant mothers’ heartbeat. Maybe it would be in tune with the 50 hertz hum of electricity.”
Her concept is utterly fantastical, and highly unlikely. But somewhere between Gaddis’ player-piano paranoia and Herndon’s Matrix-esque predictions, the future of music is out there.