Urban Auscultation; or, Perceiving the Action of the Heart

How we listen to the city is as important as what we are listening for.

Red Hook, Brooklyn, April 2020. [Steven Pisano]

For months, the Covid-19 virus has passed from body to body around the world. Its corporeal work is silent, but it reshapes the soundscape wherever it goes. 1 Coughs and sneezes turn paranoid heads; ventilators whoosh in hospital rooms; streets go suddenly quiet, as people shelter inside. Kids home from school create a new daytime soundtrack, and neighbors gather on balconies in the evening, to sing together or applaud health workers. As physicians monitor the rattle of afflicted lungs, the rest of us listen for acoustic cues that our city is convalescing, that we’ve turned inward to prevent transmission.

As we adjust to new spatial confines, to an altered sense of time, we also retune our hearing.

These new sounds and silences are so affecting because cities have long been defined by their din: by the density and variety of human voices and animal sounds; the clamor of wheels on cobblestones; the mechanical clangs, electrical hums, and radio babble; the branded ringtones and anti-loitering alarms. Most hearing people are adept at interpreting the cacophony. 2 We know which of the sounds within our radius need attention and which can be ignored. 3 At times of crisis or change, our senses are heightened, recalibrated. As we adjust to new spatial confines, to an altered sense of time, we also retune our hearing. Seismologists, for instance, have registered the Covid-19 shutdowns as a quietness that helps them perceive tectonic movements. 4

But contextual shifts are not always as sudden as a viral pandemic. We are constantly revising the way we listen to the city, and for at least a century our aural capacities have been growing in the direction of urban surveillance and public health. With technology, we track sounds over greater distances, at different timescales and intervals, discerning patterns and aberrations that are often encoded as symptoms, so that we (or our public officials) can diagnose problems and apply cures. Indeed, many of the modern technologies used to sound out the city are inspired by diagnostic tools from medicine and psychology. Through these soundings, we grasp the city’s internal mechanics, assess the materiality of its parts, analyze its rhythms. 5 And those two domains, surveillance and health, are increasingly entwined with a third, machine intelligence. 6

Under the Brooklyn-Queens Expressway, April 2020. [Steven Pisano]

With all the attention given to urban applications of machine vision — from facial recognition systems to autonomous vehicles — it’s easy to forget about machines that listen to the city. Google scientist Dan Ellis has called machine listening a “poor second” to machine vision; there’s not as much research dedicated to machine listening, and it’s frequently reduced to speech recognition. 7 Yet we can learn a lot about urban processes and epistemologies by studying how machines listen to cities; or, rather, how humans use machines to listen to cities. Through a history of instrumented listening, we can access the city’s “algorhythms,” a term coined by Shintaro Miyazaki to describe the “lively, rhythmical, performative, tactile and physical” aspects of digital culture, where symbolic and physical structures are combined. The algorhythm, Miyazaki says, oscillates “between codes and real world processes of matter.” 8 The mechanical operations of a transit system, the social life of a public library, the overload of hospital emergency rooms: all can be intoned through algorhythmic analysis.

Our tools for urban listening embody particular ways of knowing the city.

How we imagine ourselves as listening subjects, as hearing bodies, informs how we make sense of our sonic environments. As we listen to the city with both human and machinic ears, we sound it out as a particular kind of resonant or reflective body or system. If we are constantly listening for alien accents or breaking glass and gunfire — as some automated police systems do — we might imagine the city as a body that needs protection from threats. If our stock-trading bots equate the hum of vehicular traffic with economic production, we might be alarmed by quiet streets. If, instead, we listen to the city at macro scale, as an ecology of diverse lifeforms and resources and habitats, we might recognize a dynamic, vital system to be stewarded for future generations of humans and other species. When the sounds of the pandemic recede, how will our hearing be changed? Our tools for urban listening embody particular ways of knowing the city, with implications for how the city is designed, administered, policed, beautified, and maintained.

In other words, how we listen to the city is as important as what we are listening for. Amid the rise of artificially intelligent, algorithmically attuned ears, scoring the city in accordance with their own computational logics, we humans need to better understand our own acoustic agency so that we can make thoughtful choices about how to supplement our ears with machinic ones. In a world defined by climate crisis, surveillance capitalism, and the periodic collapse of global health, we need to think as much about a city’s resonance as we do its resilience and livability.

Diagram from Austin Flint and J. C. Wilson, A Manual of Auscultation and Percussion (1890). [Internet Archive]

The Stethoscope

Cities have long been compared to organic bodies, and many tools for sounding out the city were developed by first listening to ourselves. The human body is a resonance chamber whose particular sonic qualities can reveal its state of health. To diagnose a patient with fluid in the lungs, Hippocrates advised a technique called succussion: “you will place the patient on a seat which does not move, and an assistant will take him by the shoulders, and you will shake him, applying the ear to the chest, so as to recognize on which side the sign occurs.” 9 Leopold Auenbrügger, in 1761, proposed a slightly less violent method, percussion, which involved striking the body and listening to its internal resonances to locate diseases of the lungs and heart. 10

Laennec advocated for the use of instruments to ‘mediate’ the physician’s attention to audible movements inside the body.

Yet even in Auenbrügger’s time most diagnoses relied on the doctor’s visual examination and the patient’s subjective testimony. There was no need to listen deeply to the body because pathologies were attributed not to deep internal causes, but to an imbalance of humors. As autopsies became more widely accepted, those deeper causes were eventually revealed. But “without the larger ideological edifices of empiricism, pathological anatomy, and physiology,” Jonathan Sterne observes, physicians “found listening to the interior of the body to have no practical, informative purpose. … It was only when the body came to be understood as an assembly of related organs and functions that percussion … would take on such a primary role in medical diagnosis.” 11

The stethoscope, introduced by René-Théophile-Hyacinthe Laennec in the early 19th century, marks an epochal turn in the histories of listening and medicine. Treating a young, corpulent woman “laboring under general symptoms of a diseased heart,” Laennec found that her gender, age, and girth made “direct auscultation” — laying ears and hands on her body — “inadmissible.” And so he “rolled a quire of paper into a sort of cylinder and applied one end of it to the region of the heart, and the other to my ear, and was not a little surprised and pleased, to find that I could thereby perceive the action of the heart in a manner much more clear and distinct than I had ever been able to do by the immediate application of the ear.” 12 Laennec’s 1819 treatise De L’auscultation Mediate (On Mediate Auscultation) advocated for the use of instruments to “mediate” the physician’s attention to audible movements inside the body.

Laennec’s stethoscope, from his treatise on mediate auscultation. [Wellcome Collection]

The stethoscope further mediated a transition in medicine and its ways of knowing. According to Sterne, Laennec’s followers cultivated an “audile technique,” a rational mode of observation, that was “instrumental in reconstructing the living body as an object of knowledge.” Mediate auscultation placed a new physical distance between the doctor and patient, and it established sound as a source of medical data. Seeking to validate listening as a scientific method, practitioners created a taxonomy of the body’s internal sounds, a “new medical semiotics,” with each sound indexically representing a specific movement of liquids or gases. Auscultation, Sterne writes, was a “hydraulic, physiological hermeneutics.” 13

From here it is not a far leap to contemplate the stethoscope being used to listen to other systems. Natural philosopher Robert Hooke had already imagined the sounding body as a machine or factory:

There may be … a Possibility of discovering the Internal Motions and Actions of Bodies by the sound they make, who knows but that as in a Watch we may hear the beating of the Balance, and the running of the Wheels, and the striking of the Hammers and the grating of the Teeth, and Multitudes of other Noises; who knows, I say, but that it may be possible to discover the Motions of the Internal Parts of Bodies, whether Animal, Vegetable, or Mineral, by the sound they make, that one may discover the Works perform’d in the several Offices and Shops of a Man’s body, and thereby discover what Instrument or Engine is out of order. 14

Monitoring the body’s functions with new technical and conceptual instruments required specialized knowledge, which elevated physicians’ social status. Over time, tools like specula, endoscopes, X-ray machines, and MRIs enabled further investigation of internal causes for external symptoms, and the discovery of maladies with no external expression. 15 Yet the stethoscope has a special place in history, as the instrument that first registered a new way of knowing. Auscultation — mediated listening — is fundamental to modern life. Indeed, Sterne links the instrumentation of medicine to the growth of industrial cities. “Medicine itself industrialized,” he says, “in gaining a more rationalized structure; in taking shape as a self-conscious profession; in a heavier investment in the discourses of science and reason; and, finally, in its adoption of technology.” 16

Left: Noise level map of San Francisco, from the 1974 city plan. [via Eric Fischer] Right: Measuring the noise of a Boston subway train in 1973. [U.S. Environmental Protection Agency/National Archives]

The Sound Meter

The professionalization of landscape architecture and city planning was not far behind. The 19th-century urban reformers who portrayed the city as a body, with its own circulatory, respiratory, nervous, and excretory systems, drew heavily on medical discourses of the day. 17 As early as the 1850s, designers joined forces with health officials to push for public sanitation measures, water and waste removal infrastructures, and amenities like playgrounds and bathhouses. Public parks, “the lungs of the city,” were prescribed to clear out the “miasma” of urban decay and filth. But as physicians traded humoral pathology for empirical science and clinical physiology, they came to understand that infectious diseases were caused by germs, not foul air. Early 20th-century planners influenced by models like Baron Haussmann’s Paris, Daniel Burnham’s City Beautiful, and Ebenezer Howard’s Garden Cities conceived land-use zoning as a way of “immunizing urban populations from the undesirable externalities of the economy.” 18

As combustion engines, horns, and sirens proliferated, the city-as-body was becoming a “machine.” Public advocates warned about the effects of noise exposure on both the urban body and the human bodies living within it. 19 City-dwellers sought respite in libraries and other cultural spaces, often sited in park-like settings, removed from the sullying racket of the business district. Urban reformers wrote the first noise ordinances and sound-sensitive zoning policies. In 1906, Julia Barnett Rice, a non-practicing medical doctor, founded New York’s Society for the Suppression of Unnecessary Noise, which lobbied for quiet zones around city hospitals and for national legislation like the Bennet Act, which regulated boat whistles in urban harbors. 20 Soon afterward, philosopher Theodor Lessing founded the German Association for the Protection from Noise, which convinced some cities to install noise-dampening pavements and regulate train signals and steam hammers. 21

The first urban noise surveys, produced in the 1920s, revealed the limits of efforts to instrumentalize and objectify hearing.

And the new urban administrative machine required new tools to regulate the machinic environment. The portable audiometer produced a “subjective” measure of loudness; its operator compared a test sound with a reference tone, which could be dialed down until it was masked by the sound under investigation. A later technology, the acoustimeter, added a microphone, amplifier, and indicator signal, eliminating the need for user judgment. These new tools of urban auscultation were combined with a new unit of measurement, the decibel, to produce the first urban noise surveys in London, New York, Chicago, and Washington, D.C., in the 1920s. As Karin Bijsterveld notes, “Although audiometers were at first used in a strictly medical context to test hearing, the city turned out to be a crucial context for [their] development and application.” 22
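The decibel is worth pausing over, since it quantifies hearing on a logarithmic rather than linear scale. In its standard modern form (a textbook definition, not specific to the 1920s surveys), sound pressure level is expressed relative to a reference pressure near the threshold of human hearing:

\[
L_p = 20 \,\log_{10}\!\left(\frac{p}{p_0}\right)\ \text{dB}, \qquad p_0 = 20\ \mu\text{Pa}
\]

Doubling the sound pressure thus adds about 6 dB, and a tenfold increase adds 20 dB — a compression that let surveyors plot everything from rustling leaves to riveting guns on a single chart.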

This context quickly revealed the limits of efforts to instrumentalize and objectify hearing. The meters couldn’t replicate the way human ears perceived loudness, and they had trouble tracking fluctuating sounds. Bell Labs’ Rogers Galt, who reviewed urban sound surveys for the Journal of the Acoustical Society of America in 1930, emphasized the subjective, situational nature of aural perception. Whether a sound was perceived as noise, he wrote, depended on how long it lasted and how often it occurred, whether it was steady or intermittent, who made the sound, who was disturbed, and whether the sound was understood as necessary. 23 “Noise” was a product of acoustics and psychology.

Whether or not cities actually were too loud, measurable “noise levels,” with their positivist certainty, “became the sign of how bad the situation was.” 24 Public health concerns were taken seriously only after noise exposure could be quantified. Leonardo Cardoso, in his study of sound politics in São Paulo, argues that the seemingly objective measurements produced by sound-level meters came to “replac[e] our ears as the authoritative hearing actor” and ultimately conditioned our hearing to a world that the instrument could validate. “Through the minuscule repetition of a series of exposures to sound that are allowed to exist thanks to the [meter’s] validation, this technological being” has reshaped our own organic perceptual instruments. 25 We became attuned to what the machine is capable of sensing.

Acoustic sensing unit on a New York City street. [SONYC]

The Sensor Array

Quantifiable levels play an even larger role in defining urban performance in the so-called smart city. 26 As our cities grow increasingly datafied, algorithmically filtered, and optimized for efficiency, they require new instrumentation. Noise is a common target for machine listening, as a quality-of-life issue (one of the top complaints to New York City’s 311 line, for example) that is hard to police through analog methods. Many cities, including New York, Dublin, Sydney, Paris, and Singapore, have deployed distributed networks of sound sensors to assess urban noise. The Sounds of New York City (SONYC) project, run by NYU’s Center for Urban Science and Progress and developed in collaboration with the city departments of health, environmental protection, and parks and recreation, has placed dozens of sensors to “monitor, analyze, and mitigate noise pollution.” 27 Each node includes a microphone and a small Raspberry Pi computer, and the data are processed by machine listening — specifically, by artificial intelligence trained on audio datasets annotated by “citizen scientist” volunteers according to a taxonomy of urban sounds. The aim is to extract “meaningful information” from environmental audio, so that cities can identify and target specific sound sources that present problems, like jackhammers, idling engines, loud HVAC, barking dogs, or car horns. 28
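To make the pipeline concrete, here is a minimal sketch of what a node in such a network might do: capture a clip, reduce it to the log-mel features common in audio tagging, and score it against a classifier trained on an annotated taxonomy. The model, label set, and threshold are hypothetical stand-ins, not SONYC’s actual system.

```python
# A minimal sketch of an urban-sound sensor node. The model, label set, and
# threshold are hypothetical; SONYC's real pipeline is far more elaborate.
import numpy as np
import librosa

LABELS = ["jackhammer", "idling engine", "loud HVAC", "dog bark", "car horn"]

def log_mel(clip: np.ndarray, sr: int = 22050) -> np.ndarray:
    """Reduce a mono clip to the log-mel spectrogram most audio taggers use."""
    mel = librosa.feature.melspectrogram(y=clip, sr=sr, n_mels=64)
    return librosa.power_to_db(mel, ref=np.max)

def tag_clip(clip: np.ndarray, sr: int, model) -> list:
    """Return (label, score) pairs the classifier rates above threshold."""
    features = log_mel(clip, sr)[np.newaxis, ...]   # batch of one
    scores = model.predict(features)[0]             # hypothetical multilabel model
    return [(label, float(s)) for label, s in zip(LABELS, scores) if s > 0.5]

# On-device: tag short clips and transmit tags and levels rather than
# continuous raw audio -- one way a sensor network can hedge against
# becoming an open microphone.
```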

As our cities grow increasingly datafied, algorithmically filtered, and optimized for efficiency, they require new instrumentation.

The SONYC team has also created a visualization tool, Urbane, that generates a 3D map of a city’s sound data over time and connects it to other urban data streams, so that local governments can efficiently schedule inspections at sites of potential noise code violation. Claudio Coletta and Rob Kitchin propose that such systems could be made “algorhythmic,” or responsive to urban flows and fluctuations across seasons, days of the week, and times of day. Planners could correlate noise readings with data about road surfaces, vehicle counts, traffic speed, topology, and other variables to create daytime and nighttime sound maps that inform noise reduction policies. Here, Coletta and Kitchin write, “we have a set of algorhythms at work, algorithmically measuring, processing, and analyzing urban sound and its rhythms.” 29
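A toy version of that algorhythmic cross-referencing — with invented file and column names standing in for whatever feeds a system like Urbane — might simply join hourly noise readings to traffic counts and ask how tightly the two rhythms co-vary across the week:

```python
# Sketch of 'algorhythmic' analysis: correlate a noise rhythm with a traffic
# rhythm. File layouts and column names are invented for illustration.
import pandas as pd

noise = pd.read_csv("noise_readings.csv", parse_dates=["timestamp"])    # sensor_id, timestamp, db
traffic = pd.read_csv("traffic_counts.csv", parse_dates=["timestamp"])  # sensor_id, timestamp, vehicles

hourly = (
    noise.merge(traffic, on=["sensor_id", "timestamp"])
         .assign(hour=lambda df: df.timestamp.dt.hour,
                 weekday=lambda df: df.timestamp.dt.dayofweek < 5)
)

# Average day-part profiles: where does noise deviate from what traffic predicts?
print(hourly.groupby(["weekday", "hour"])[["db", "vehicles"]].mean())
print(hourly[["db", "vehicles"]].corr())   # how tightly the two rhythms track
```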

SONYC’s makers argue that the system will also provide timely information to “those in a position to control emissions” — construction-site managers, truck drivers, pet owners, and so on — and incentivize “self-regulation.” 30 Self-regulation is a key principle of “performance zoning,” which proposes that urban residents can do as they please in their homes and businesses and public spaces, so long as they don’t exceed certain thresholds for noise, toxic emissions, and other measurable behaviors. In a city zoned by performance standards, algorithmic auscultation with embedded sensors could be a means of discipline and regulation. 31 As Cardoso foretold, the acoustic panopticon — the panacousticon — would compel human bodies to operate in accordance with its machinic logic. 32

Data ethicists warn that the racial and gender biases built into our measuring machines will further inequities in care.

I’ve written elsewhere about the convergence of algorithmic planning and “smart cities” with biometrics and “precision medicine” — the pursuit of optimized cities that cultivate optimized bodies. 33 We can imagine a future city whose acoustic qualities are computationally tuned to promote physical and mental health. (Researchers have already proposed using computer audition to monitor the spread of Covid-19 and ensure social distancing.) 34 Yet data ethicists warn that the racial and gender biases built into our measuring machines will further inequities in care, as they have in medicine and in the provision of urban services like housing and policing. 35

The turn toward algorithmic city planning mirrors what is happening in medical offices. Some health professionals worry that the stethoscope is going out of fashion, supplanted by echocardiography and handheld ultrasound devices that increase the physical and affective distance between doctor and patient. Yet anthropologist Tom Rice finds that some physicians remain committed to auscultation as an “index of sympathetic and empathetic medical practice.” 36 So, too, could we commit ourselves to sounding out the city with more empathic modes of instrumented listening.

Air conditioners in Singapore. [Peter Morgan]

Listening to Systems

In the 1960s and ’70s, acoustic ecologists like R. Murray Schafer, Barry Truax, and Hildegard Westerkamp developed qualitative, subjective methods for studying relationships between humans and their environments. These researchers deployed field units to make comparative site recordings across time, invented annotation systems, made maps, and experimented with alternative ways to visualize sonic data. 37 In the decades since, their followers have conducted longitudinal sound studies that reveal insights about climate change, species loss, urbanization, gentrification, and other forms of environmental and social change. 38 The sociologist Henri Lefebvre, in an essay published after his death, proposed an embodied practice of rhythmanalysis, a way of mediating urban perception with one’s physical presence. The rhythmanalyst, Lefebvre says, “listens — and first to his body; he learns rhythm from it, in order consequently to appreciate external rhythms. His body serves him as a metronome.” The body is a means of mediate auscultation; the rhythmanalyst approaches the city as a physician would, listening for “malfunctions of rhythm, or … arrhythmia.” 39

Lefebvre advised that we ‘listen to a house, a street, a town, as an audience listens to a symphony.’ We should also listen to urban systems like transit and public health, regardless of their musicality.

This is a holistic practice, extending across spatial and temporal scales. Lefebvre advised that we “listen to a house, a street, a town, as an audience listens to a symphony,” discerning the role of each agent, or instrument, in composing the whole. 40 We should also listen to urban systems like housing and transit and public health, regardless of their musicality. To be good stewards of (or interventionists in) these systems, we must be able to recognize submerged sounds and obscure patterns, with and without machines. As a viral pandemic sweeps the world, we can supplement our reading of public health statistics by listening with our bodies to the street and the supermarket.

Using the example of a car engine, Bijsterveld draws a distinction between “monitory listening,” which tells drivers whether the internal mechanisms of a system are working as they should, and “diagnostic listening,” which experts use to identify internal problems based on a taxonomy of aberrant sounds. 41 These two modes of listening are constantly happening all around us, and they are crucial to the maintenance and care of the city’s technical and social infrastructures. 42 Civil engineers, for example, listen to ambient vibrations, harmonic excitations, and wave propagation to detect structural weaknesses in buildings and bridges and transit beds. And advanced instruments help us listen across urban scales that are not easily heard by human ears or bodies. Researchers in Alister Smith’s Listening to Infrastructure lab at Loughborough University study sensors that monitor high-frequency “acoustic emissions” from “geotechnical assets” (buried pipelines, foundations, retaining structures, tunnels, and dams) in order to assess their condition, locate weakness, and target maintenance work. 43

This work to auscultate infrastructure, to render it sensible, helps us appreciate how much listening we have ceded to machines.

This applied research extends a tradition among artists who have sounded out infrastructural elements. For the centennial of the Brooklyn Bridge, in 1983, Bill Fontana mounted eight microphones under the bridge’s steel grid roadway and broadcast live sounds at the World Trade Center plaza. In 1999, Stephen Vitiello spent six months in residence on the 91st floor of the World Trade Center, recording how Tower One swayed and creaked with the wind. Such works make sensible the micro-rhythms and macro-scale physical stresses that infrastructures withstand and amplify the distinct mechanics of their materials and construction techniques. 44 Other artists have encouraged listening to technical and media infrastructures, such as WiFi networks, cell connections, and the global positioning system. Since 2004, Christina Kubisch has hosted “Electrical Walks” in several dozen cities. Participants wear specially designed headphones that translate electromagnetic signals into audible sounds, disclosing the waves and particles — generated by activities like ATM transactions and CCTV surveillance — which perpetually envelop and penetrate urban bodies. Similarly, Shintaro Miyazaki and Martin Howse use logarithmic detectors, amplifiers, and wave-filter circuits to transform electromagnetism into sound, revealing the “rhythms, signals, fluctuations, oscillations and other effects of hidden agencies within the invisible networks of the ‘technical unconscious.’” 45

This work to auscultate infrastructure, to render it sensible, helps us appreciate how much listening we have ceded to machines. Turbines, windmills, freezers, vent fans, and hard-to-access machines in the off-limits “clean rooms” of pharmaceutical and tech manufacturing facilities — all signal their health to system operators by chugging along with a consistent tone and rhythm. AI can purportedly predict and prevent infrastructural snafus by scanning for idiosyncrasies within high-performance systems. 46 Some players in the predictive analytics field build training sets with sound samples of well-behaved machines, while others listen across a wide array of systems, identify anomalies, and then invite human engineers to help them analyze and classify the aberrant sounds. Humans also play a mediating role as liaisons between automated sonic analysis and the deployment of emergency services or maintenance workers. A manager overseeing a water treatment plant during a violent storm might rely on a dashboard of sonic alerts to pinpoint mechanical failures and then dispatch staff — or robots — to fix the problem. In the future, this auscultative agent might be the only human in the facility.
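In its simplest form, the first approach amounts to fitting a statistical profile of a healthy machine’s spectrum and scoring new sounds by their distance from it — a monitory threshold that, once crossed, hands the sound over to human diagnostic listening. A bare-bones sketch, with illustrative features and thresholds rather than any vendor’s actual method:

```python
# Sketch of acoustic anomaly detection: profile a healthy machine's spectrum,
# then score new sounds by their deviation. Features and thresholds are
# illustrative, not any vendor's method.
import numpy as np

def band_energies(clip: np.ndarray, n_bands: int = 16) -> np.ndarray:
    """Summarize a clip as log energy in a handful of frequency bands."""
    spectrum = np.abs(np.fft.rfft(clip)) ** 2
    return np.log1p(np.array([band.mean() for band in np.array_split(spectrum, n_bands)]))

def fit_profile(healthy_clips: list) -> tuple:
    """Mean and spread of band energies across recordings of normal operation."""
    feats = np.stack([band_energies(c) for c in healthy_clips])
    return feats.mean(axis=0), feats.std(axis=0) + 1e-9

def anomaly_score(clip: np.ndarray, profile: tuple) -> float:
    """Average number of standard deviations the new sound sits from 'normal'."""
    mu, sigma = profile
    return float(np.abs((band_energies(clip) - mu) / sigma).mean())

# A score well above ~3 would page the human engineer whose job is to listen,
# classify the aberration, and decide whether to dispatch maintenance.
```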

Athanasius Kircher’s 1650 design for eavesdropping in a public square. [via Tobias Ewe]

Listening to Ourselves and Each Other

And sometimes the city’s artificially intelligent ears are turned on us. Xiaochang Li and Mara Mills describe the historical role of “vocal portraits” in criminal records. Since the early 20th century, police departments across the U.S. and Europe have recorded and archived voices for forensic purposes — to aid in speaker identification, or to allow researchers to identify supposed qualities of the criminal character. Today, international law enforcement agencies use software to match speech samples from phone calls and social media posts with “voice-prints” in a shared database. China has reportedly linked voice-prints to transit ticket machines, health care and educational systems, and citizens’ national IDs. 47
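Voice-print matching of this kind typically reduces each voice to a fixed-length numerical embedding and compares embeddings by angular distance; the consequential (and contested) choices live in the embedding model and the match threshold, both assumed in this schematic of the comparison step:

```python
# Schematic of voice-print matching: compare fixed-length voice embeddings.
# The embedding model and the threshold -- the consequential parts -- are assumed.
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """1.0 means the embeddings point the same way; near 0.0 means unrelated."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def match_voiceprint(query: np.ndarray, enrolled: dict, threshold: float = 0.75) -> list:
    """Return IDs of enrolled speakers whose stored embedding resembles the query."""
    return [speaker_id for speaker_id, embedding in enrolled.items()
            if cosine_similarity(query, embedding) >= threshold]

# Everything hinges on the threshold: set it low and strangers 'match';
# set it high and noisy or disguised voices slip through.
```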

Speech recognition also helps investigators probe alibis and insurance companies verify claims. Layered voice analysis software can purportedly detect lies and incriminating affective qualities like embarrassment, overzealousness, anxiety, or an “attempt to outsmart” the interviewer. Embedded in the software are algorithmic shibboleths: inflections, catchwords, expressions, and marks of dialect that act as “aural biopolitical signatures” of individual identity. 48 At immigration offices, forensic linguists scrutinize accented voices to determine whether they match the traumatic narratives presented in asylum claims. Artist Lawrence Abu Hamdan argues that this disembodiment violates the principle of habeas corpus, which stipulates that the accused must appear before the judge in recognition of the fact that “the voice is a corporeal product” whose semantic and forensic value exceeds its written documentation or audio recording. The voice has a body. He proposes that heterogeneous, hard-to-place accents should be understood as a “biography of migration,” a sonic composition that defies the body’s identification with a single nation-state. 49

And a growing number of schools, prisons, hospitals, and city governments deploy audio analytics to passively monitor their populations. The Dutch company Sound Intelligence, founded in 2000, makes software that scans voices in the environment for signs of fear, anger, and duress, and then summons authorities or records a sonic event for forensic purposes. This “aggression detection” software is loaded on microphones made in California by Louroe Electronics and on security cameras made in Sweden by Axis Communications, and then marketed through school safety and law enforcement catalogs and conventions. (Sound Intelligence also offers systems that detect and geolocate gunshots and broken glass.) 50 While some customers told ProPublica that these products have become “indispensable” in their operations, reporters found that the systems were often hypersensitive and unreliable, interpreting rough, high-pitched voices — like those often heard in high-school gyms and cafeterias — as aggressive, and even mistaking slammed lockers for gunshots. 51

Audio surveillance system by Louroe Electronics.

Such machines can listen from the macro to the micro scale, taking in the chatter of an entire concert hall or public square and then homing in on the granular properties of an individual voice. A large enough network of automated ears could hypothetically listen at the scale of the city, identify anomalies, and sonically access the interiority of urban subjects, discerning their identity or intention, their humor or their health. Which, again, underscores the role of human arbiters. Just as we want an empathetic physician at the other end of the stethoscope (and a robust health department and licensing board setting the terms of that relationship), we should want a qualitative methodologist to contextualize urban noise data, a human engineer to make sense of recorded vibrations in buried pipelines, an asylum-seeking body present before the judge to defend herself with the full power of her voice. Machines might be used to listen widely, to identify general areas and issues of concern, but we should then follow up with diverse, localized, qualitative methods of investigation. And sometimes it’s best that we not listen at all — that we let the city’s sounds be ephemeral and private and inscrutable.

We might imagine a machine listening system serving a compositional role, creating soundtracks that report the operational status of transit or waste management systems.

Sarah Barns proposes that we recognize the future city as a “complex field of cognition, computation, desire and experience,” an assemblage of vibrating, resonating, listening, sounding machines and bodies, including those of other species. 52 The polyphonic city contains many distinct ways of sensing and knowing, of diagnosing and healing, our selves and our spaces. Perhaps listening machines — rather than making scripted determinations about what “meaningful” information is extracted from the sonic environment — could be recruited by cities or community groups or artists to amplify the messy richness of that assemblage, or to highlight the machines’ own subjectivity, or to compel us to listen to ourselves, and our machines, listening. 53

Attending to whole ecologies, rather than specific sounds, reminds us that we live amid great biodiversity, and that listening can be a means of caring for those ecologies, rather than controlling or disciplining them. 54 For example, the Manchester-based company Sensemaker designs bespoke kits that enable journalists to gather recorded audio and local biodata that can prompt investigative reporting and editorial responses. Perhaps those sensor kits could be used for sonic investigations of questions like why all the warblers have left the city park, or what it means that traffic noises have increased in a neighborhood adjacent to rezoned territory. 55 Another example: experimental musician Julianna Barwick and music technologist Luisa Pereira created a generative soundtrack for a New York hotel lobby, using a rooftop camera that told a computer about environmental conditions, cuing up looping synthesizers and breathy voices to register the presence of birds or airplanes, moonlight or clouds. 56 We might imagine a machine listening system serving a similar compositional role, creating soundtracks that report the operational status of transit or waste management systems, as sketched below.
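One naive mapping — entirely invented, in the spirit of the Barwick/Pereira lobby score, with hypothetical status fields — from a transit system’s operational state to musical parameters:

```python
# An invented sketch of operational sonification: map a transit system's
# status (hypothetical fields) to parameters a synth loop could render.
def status_to_score(on_time_ratio: float, headway_seconds: float, alerts: int) -> dict:
    """Healthy service sounds slow and consonant; disruption quickens and sours it."""
    return {
        "tempo_bpm": 60 + min(alerts, 10) * 8,            # more alerts, faster pulse
        "root_hz": 220.0 * (0.9 + 0.2 * on_time_ratio),   # pitch sags as delays mount
        "interval": "perfect fifth" if on_time_ratio > 0.9 else "minor second",
        "loop_seconds": max(headway_seconds / 10, 2.0),   # phrase length follows headway
    }

print(status_to_score(on_time_ratio=0.95, headway_seconds=240, alerts=1))
# e.g. tempo 68 bpm, root ~240 Hz, a perfect fifth, 24-second loops
```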

Screenshot from a video promoting the use of Microsoft’s Custom Vision AI in Julianna Barwick’s generative lobby score for the Sister City hotel. [via YouTube]

Other projects push back against machine listening by surfacing its flaws. “Laughing Room,” by Hannah Davis and Jonny Sun, and “Hey Robot,” by Everybody House Games, are interactive games that invite players to read the computer’s “personality” in order to trigger its canned laughter or elicit other responses. 57 As we probe a machine’s glitches and failures, we get a better sense of the logics by which it operates, the taxonomies and training sets that undergird its performance, the way it operationalizes affect through keywords and speech patterns. Our laughter is a means of auscultating the machine itself.

Then again, human auditors are glitchy, too. 58 We harbor sonic prejudices and modulate our attention when we hear particular rhetorical registers or vocal affectations. And we are conditioned by our class, race, gender, and personal and cultural histories to tune in to, or out of, particular environmental sounds: traffic noise, loud neighbors, street music, howling winds, crying babies, braying animals. Like machine algorithms, we run on biased training sets.

Recognizing the logics and illogics of automated systems can help us see the variables that condition our own practices of immediate auscultation, and the sounding and listening capacities of the other entities who share our environments. A polyphonic mode of distributed listening helps us appreciate how our actions — making music and noise, building and maintaining infrastructure, tracking and monitoring fellow citizens, creating acoustic space for bodies to rest and heal — reverberate across time and space, and beyond the range of human ears.

Tree and shadow. Clinton Street, Brooklyn, April 2020. [Steven Pisano]

Author’s Note

Thanks to Omar Berrada and Leslie Hewitt, who invited me to share this work as part of their Intra-Disciplinary Seminar at The Cooper Union in December 2019. I’m grateful, too, to the event’s attendees for their helpful feedback.

Notes
  1. See, for example, Jessica Wang and Vivien Ngo, “How Coronavirus Has Changed the Sound of Our Cities,” The Wall Street Journal, April 16, 2020; Andy Newman, “What N.Y.C. Sounds Like Every Night at 7,” The New York Times, April 7, 2020; and Cities and Memory, “Sounds From the Global Covid-19 Lockdown” (2020).
  2. For more on disability, accessibility, and urban design, see, for instance, Jos Boys, Ed., Disability, Space, Architecture: A Reader (Routledge, 2017); Gallaudet, DeafSpace; Gill Harold, “Reconsidering Sound and the City: Asserting the Right to the Deaf-Friendly City,” Environment and Planning D 31:5 (2013), https://doi.org/10.1068/d3310; Sarah Holder, “How to Design a Better City for Deaf People,” CityLab, March 4, 2019; Alexa Vaughn, “DeafScape: Applying DeafSpace to Landscape,” Ground Up 9.
  3. See Brian Larkin, “Techniques of Inattention: The Mediality of Loudspeakers in Nigeria,” Anthropological Quarterly 87:4 (2014), 989-1015, https://doi.org/10.1353/anq.2014.0067.
  4. Robin George Andrews, “Coronavirus Turns Urban Life’s Roar to Whisper on World’s Seismographs,” The New York Times, April 8, 2020.
  5. Henri Lefebvre, Rhythmanalysis: Space, Time and Everyday Life, trans. Stuart Elden and Gerald Moore (Continuum, 2004).
  6. Shannon Mattern, “Databodies in Codespace,” Places Journal, April 2018, https://doi.org/10.22269/180417
  7. Dan Ellis, “A History and Overview of Machine Listening,” Computational Audition Workshop, London, May 12-14, 2010, https://doi.org/10.7916/D8Q81N84. Ellis made this proclamation a decade ago, but it still rings true, or “echoes.”
  8. Shintaro Miyazaki, “AlgoRHYTHMS Everywhere: A Heuristic Approach to Everyday Technologies,” in Jan Hein Hoogstad and Birgitte Stougaard Pedersen, eds., Off Beat: Pluralizing Rhythm, (Thamyris/Intersecting, 2013), 135-48, https://doi.org/10.1163/9789401208871_010. See also Shintaro Miyazaki, “Urban Sounds Unheard-of: A Media Archaeology of Ubiquitous Infospheres,” Continuum: Journal of Media & Cultural Studies 27:4 (2013), 514–22, https://doi.org/10.1080/10304312.2013.803302. Miyazaki explains that “Algorhythms are vibrational, pulsed and rhythmized signals constituted both by transductions of physical fluctuations of energy and their oscillations as well as by abstract and logical structures of mathematic calculations. Algorhythms act in-between the … hardware and software” (519).
  9. Quoted in New York Medical Journal 55:17 (April 23, 1892), 475. Pleurisy, Hippocrates said, is indicated by “a creak like that of leather.”
  10. Clifford Allbutt and J. F. Payne, “The History of Medicine,” in Thomas Clifford Allbutt and Humphry Davy Rolleston, eds., A System of Medicine by Many Writers (MacMillan, 1905), 35-36.
  11. Jonathan Sterne, The Audible Past (Duke University Press, 2002), 119-20.
  12. Quoted in Paul Thagard, How Scientists Explain Disease (Princeton University Press, 1999), 145.
  13. Sterne, 99, 103, 122, 128, 131.
  14. Robert Hooke, “A General Scheme, or Idea of the Present State of Natural Philosophy…,” in Richard Waller, ed., The Posthumous Works of Robert Hooke (Samuel Smith and Benjamin Walford, 1705), 39-40; quoted in Stanley Joel Reiser, Medicine and the Reign of Technology (Cambridge University Press, 1978), 23.
  15. See Tom Rice, “Sounding Bodies: Medical Students and the Acquisition of Stethoscopic Perspectives” in Trevor Pinch and Karin Bijsterveld, eds., The Oxford Handbook of Sound Studies (Oxford University Press, 2012), 298-319; and Jacalyn Duffin, To See With a Better Eye: A Life of R. T. H. Laennec (Princeton University Press, 1999), 302.
  16. Sterne, 101.
  17. Some of the material in this section is adapted from Mattern, “Databodies in Codespace.” See also Giovanna Borasi and Mirko Zardini, “Demedicalize Architecture,” Places Journal, March 2012, https://doi.org/10.22269/120306; Thomas Fisher, “Frederick Law Olmsted and the Campaign for Public Health,” Places Journal, November 2010, https://doi.org/10.22269/101115; Richard Sennett, Flesh and Stone: The Body and the City in Western Civilization (Norton, 1994); and Sara Jensen Carr, The Topography of Wellness: Health and the American Urban Landscape (University of Virginia Press, forthcoming 2020).
  18. Jason Corburn, “Confronting Challenges in Reconnecting Urban Planning and Public Health,” American Journal of Public Health 94:4 (2004), 541-46, https://doi.org/10.2105/ajph.94.4.541. See also Jason Corburn, “Reconnecting Urban Planning and Public Health,” in Rachel Weber and Randall Crane, eds., The Oxford Handbook of Urban Planning (Oxford University Press, 2012), 404.
  19. For a discussion of Progressive Era noise debates, see Shannon Mattern, “Resonant Texts: Sounds of the American Public Library,” Senses & Society 2:3 (2007), 277-302, https://doi.org/10.2752/174589307X233521; Shannon Mattern, “Waves and Wires: Cities of Electric Sound,” Code and Clay, Data and Dirt: 5000 Years of Urban Media (University of Minnesota Press, 2017), 1-41; Emily Thompson, The Soundscape of Modernity: Architectural Acoustics and the Culture of Listening in America, 1900 to 1930 (MIT Press, 2002); and the work of Peter Bailey, Karin Bijsterveld, Murray Schafer, Hillel Schwartz, Raymond Smilor, and Mark M. Smith.
  20. “Makes Quiet Zones for City Hospitals,” New York Times, June 24, 1907; and Thompson, 121.
  21. Karin Bijsterveld, Mechanical Sound: Technology, Culture and Public Problems of Noise (MIT Press, 2008), 101.
  22. Karin Bijsterveld, “The Diabolical Symphony of the Mechanical Age: Technology and Symbolism of Sound in European and North American Noise Abatement Campaigns, 1900-40,” Social Studies of Science 31:1 (2001), 52, https://doi.org/10.1177/030631201031001003; Bijsterveld, Mechanical Sound, 108-10; and Karin Bijsterveld, “‘The City of Din’: Decibels, Noise, and Neighbors in the Netherlands, 1910-1980,” Osiris 18 (2003), 184, https://doi.org/10.1086/649383. See also Leonardo Cardoso, Sound-Politics in São Paulo (Oxford University Press, 2019), 50-54.
  23. Rogers H. Galt, “Results of Noise Surveys: Part I. Noise Out-of-Doors,” Journal of the Acoustical Society of America 2:30 (1930), https://doi.org/10.1121/1.1915233; Michael Mopas, “Howling Winds: Sound, Sense, and the Politics of Noise Regulation,” Canadian Journal of Law and Society 34:2 (2019), 307-25, https://doi.org/10.1017/cls.2019.19.
  24. Bijsterveld, 110.
  25. Cardoso, 54.
  26. Some sentences in this section are adapted from Shannon Mattern, “The Pulse of Global Passage: Listening to Logistics,” in Matthew Hockenberry, Nicole Starosielski, and Susan Zieger, eds., Assembly Codes: The Logistics of Media (Duke University Press, forthcoming 2020). See also Dietmar Offenhuber and Sam Auinger, “Politics of Sensing and Listening,” in Sergio M. Figueiredo, Sukanya Krishnamurthy, and Torsten Schroeder, eds., Architecture and the Smart City (Routledge, 2019).
  27. David Owen, “Is Noise Pollution the Next Big Public Health Crisis?,” The New Yorker (May 6, 2019). See also the SONYC website.
  28. Juan P. Bello, Claudio Silva, Oded Nov, R. Luke DuBois, Anish Arora, Justin Salamon, Charles Mydlarz, and Harish Doraiswamy, “SONYC: A System for Monitoring, Analyzing, and Mitigating Urban Noise Pollution,” Communications of the ACM 62:2 (February 2019), 2, https://doi.org/10.1145/3224204; Mark C. Cartwright, Ana Elisa Mendez Mendez, Jason Cramer, Vincent Lostanlen, Graham Dove, Ho-Hsiang Wu, Justin Salamon, Oded Nov, and Juan Pablo Bello, “Sonyc Urban Sound Tagging (SONYC-UST), A Multilabel Dataset from an Urban Acoustic Sensor Network,” Proceedings of the Workshop on Detection and Classification of Acoustic Scenes and Events, New York (2019), https://doi.org/10.5281/zenodo.2590742.
  29. Claudio Coletta and Rob Kitchin, “Algorithmic Governance: Regulating the ‘Heartbeat’ of a City Using the Internet of Things,” Big Data & Society (2017), 11, https://doi.org/10.1177/2053951717742418. See also the Harmonica Index, which eschews the decibel and instead graphs noise peaks in relation to background noise as they vary throughout the day and night, and as listeners are more or less sensitive to sonic interruption (YouTube).
  30. Bello, et al., 2.
  31. On performance-based planning and zoning, see Douglas C. Baker, Neil G. Sipe, and Brendan J. Gleeson, “Performance-Based Planning: Perspectives from the United States, Australia, and New Zealand,” Journal of Planning Education and Research 25 (2006), 396-409, https://doi.org/10.1177/0739456X05283450; Daniel Doctoroff, “The Shared City: How Technology Will Improve Urban Living,” MAS Summit, New York, NY, October 22–23, 2015; Sidewalk Labs, “Zoning: The Legal and Social Codes of Urban Planning,” September 21, 2017; and Luc Wilson, Jason Danforth, Dennis Harvey, and Nicolas LiCalzi, “Quantifying the Urban Experience: Establishing Criteria for Performance Based Zoning,” SimAUD, Delft (2017). Today, many cities combine objective and subjective measures in developing acoustic planning models and enforcing noise abatement policies: certain types of noise, or noises that produce certain effects, might be prohibited, along with noises that exceed a particular quantitative measurement level. See Michael Mopas, “Howling Winds: Sound, Sense, and the Politics of Noise Regulation,” Canadian Journal of Law and Society 34:2 (2019), 314, https://doi.org/10.1017/cls.2019.19. A noise complaint can be substantiated by a resident’s narrative testimony, e.g. that the neighbor’s thumping bass causes headaches and nausea, even if it doesn’t rate high on a decibel meter.
  32. “Panacousticon” is drawn from Peter Szendy’s All Ears: The Aesthetics of Espionage, trans. Roland Végsö (Fordham University Press, 2016). Thanks to Brian Miller and Julie Napolin for the reference.
  33. Mattern, “Databodies in Codespace.” See also Nelson Pacheco Rocha, Ana Dias, Gonçalo Santinha, Mário Rodrigues, Alexandra Queirós, and Carlos Rodrigues, “Smart Cities and Public Health: A Systematic Review,” Procedia Computer Science 164 (2019), 516-23, https://doi.org/10.1016/j.procs.2019.12.214; Susanne Moebus, Robynne Sutcliffe, Bryce Lawrence, Salman Ahmed, Timo Haselhoff, and Dietwald Gruehn, “Acoustic Quality and Health in Urban Environments: The SALVE Project,” Real Corp Proceedings (24th International Conference on Urban Planning and Regional Development in the Information Society), Karlsruhe, Germany, April 2-4, 2019.
  34. Björn W. Schuller, Dagmar M. Schuller, Kun Qian, Juan Liu, Huaiyuan Zheng, and Xiao Li, “COVID-19 and Computer Audition: An Overview on What Speech & Sound Analysis Could Contribute in the SARS-CoV-2 Corona Crisis,” March 24, 2020, Preprint.
  35. Kadija Ferryman and Mikaela Pitcan, “Fairness in Precision Medicine,” Data & Society Report (2018); Celia B. Fisher, “Will Research on 10,000 New Yorkers Fuel Future Racial Health Inequality?,” The Ethics and Society Blog, August 30, 2016.
  36. Tom Rice, “Listening,” in David Novak and Matt Sakakeeny, eds., Keywords in Sound (Duke University Press, 2015).
  37. See R. Murray Schafer, The Soundscape: Our Sonic Environment and the Tuning of the World (Destiny Books, 1994 [1977]); and Barry Truax, Handbook for Acoustic Ecology (ARC Publications, 1978). Parts of this section are drawn from Shannon Mattern, “Sonic Archaeologies,” in Michael Bull, ed., The Routledge Companion to Sound Studies (Routledge, 2019), 222-30.
  38. See Leah Barclay, “Listening to Communities and Environments,” Contemporary Music Review 36:3 (2017), 143-58, https://doi.org/10.1080/07494467.2017.1395140; Zuzana Burivalova, Purnomo, Bambang Wahyudi, Timothy M. Boucher, Peter Ellis, Anthony Truskinger, Michael Towsey, Paul Roe, Delon Marthinus, Bronson Griscom, and Edward T. Game, “Using Soundscapes to Investigate Homogenization of Tropical Forest Diversity in Selectively Logged Forests,” Journal of Applied Ecology 56:11 (2019), 2493-2504, https://doi.org/10.1111/1365-2664.13481; Garth Paine, “Using the Sounds of Nature to Monitor Environmental Change,” Smithsonian Magazine, December 28, 2018; Rosamund Portus and Claire McGinn, “Bees, Extinction and Ambient Soundscapes: An Exploratory Environmental Communication Workshop,” Humanities 8 (2019), https://doi.org/10.3390/h8030153; Roberta Righini and Gianni Pavan, “A Soundscape Assessment of the Sasso Fratino Integral Nature Reserve in the Central Apennines, Italy,” Biodiversity (2019), https://doi.org/10.1080/14888386.2019.1696229; and Antonella Radicchi’s research on urban soundscapes.
  39. Lefebvre, 19, 88. Sara Adhitya proposes that Lefebvre’s rhythmanalysis was “seen as a form of psychoanalysis and pathology. Listening inwards, like a physician, the rhythmanalyst could diagnose which bodily rhythms were malfunctioning in the event of illness; listening outwards, using our eyes, ears, memory and heart as a measure, one could diagnose our urban rhythms in a similar way. However, to Lefebvre, this meant more than simply listening to the urban soundscape: one had to listen to ‘a house, a street, a town as one listens to a symphony, an opera.’ Through understanding the rhythmic role each urban element has to play in the overall composition of the city, arrhythmia in our urban environments could also be identified.” Sara Adhitya, Musical Cities (UCL Press, 2017), 15. See also Nina Hällgren, “Designing With Urban Sound: Exploring Methods for Qualitative Sound Analysis of the Built Environment,” Thesis, KTH School of Architecture and the Built Environment, Konstfack University College of Art, Crafts and Design (2019); and Invisible Places: Sound, Urbanism, and Sense of Place, São Miguel Island, Azores, Portugal, April 7-9, 2017.
  40. Lefebvre, 22.
  41. Karin Bijsterveld, Sonic Skills: Listening for Knowledge in Science, Medicine and Engineering (1920s – Present) (Palgrave Macmillan, 2019), 77. See also Karin Bijsterveld, Sound and Safe: A History of Listening Behind the Wheel (Oxford University Press, 2014); Stefan Krebs, “‘Sobbing, Whining, Rumbling’: Listening to Automobiles as Social Practice,” in Trevor Pinch and Karin Bijsterveld, eds., The Oxford Handbook of Sound Studies (Oxford University Press, 2012), 94; and Shannon Mattern, “Things That Beep: A Brief History of Product Sound Design,” Avant (August 22, 2018).
  42. See Shannon Mattern, “Maintenance and Care,” Places Journal, November 2018, https://doi.org/10.22269/181120.
  43. Stephane Hans, Claude Boutin, Erdin Ibraim, and Pierre Roussillon, “Dynamic Auscultation of Buildings and Seismic Integrity Threshold Assessment,” First European Conference on Earthquake Engineering and Seismology, Geneva, Switzerland, September 3-8, 2006; Jean-Paul Kurtz, Dictionary of Civil Engineering (Kluwer Academic Publishers, 2004), 46-47; F. Lamas-Lopez, Y. J. Cui, S. Costa d’Aguiar, and N. Calon, “Geotechnical Auscultation of a French Conventional Railway Track-Bed for Maintenance Purposes,” Soils and Foundations 56:2 (April 2016), 240-50; “Listening to Infrastructure”; and Stuart Nathan, “Soil Squeaks Give Early Warning of Infrastructure Collapse,” The Engineer, October 23, 2019.
  44. See Shannon Mattern, “SoundMatter,” “No Thing Unto Itself: Object-Oriented Politics,” CUNY Graduate Center (October 20, 2011); and Kurt Andersen, “The Sounds of the World Trade Center,” Studio 360, September 2, 2011.
  45. Christina Kubisch, “Electrical Walks: Electromagnetic Investigations in the City”; and Miyazaki, “Urban Sounds Unheard-of.”
  46. John Mannes, “The Sound of Impending Failure,” Tech Crunch, January 29, 2017; and Ben Popper, “Listening to Machines to Understand Why They Break,” The Verge, January 11, 2017. See also the company Augury, which “combines the foundations of asset performance management (APM) and predictive maintenance (PdM) with the most recent advances in sensor technology” — including vibration, ultrasonic, temperature, and magnetic sensors — and artificial intelligence. “Machine learning algorithms compare your machine data to tens of thousands of recordings in our ever-growing database to detect anomalies and diagnose equipment malfunctions.”
  47. Xiaochang Li and Mara Mills, “Vocal Features: From Voice Identification to Speech Recognition by Machine,” Technology and Culture 60:2 (April 2019), S129-60, https://doi.org/10.1353/tech.2019.0066; Michael Dumiak, “Interpol’s New Software Will Recognize Criminals by Their Voices,” IEEE Spectrum, May 16, 2018; Ryan Gallagher, “Watch Your Tongue: Law Enforcement Speech Recognition System Stores Millions of Voices,” Slate, September 20, 2012; and Echo Huang, “After Faces, China is Moving Quickly to Identify People by Their Voices,” Quartz, March 20, 2018.
  48. Graeme Wood, “The Refugee Detectives,” The Atlantic, April 2018; Amar Toor, “Germany To Use Voice Analysis Software to Determine Where Refugees Come From,” The Verge, March 17, 2017; and Emily Apter, “Shibboleth: Policing by Ear and Forensic Listening in Projects by Lawrence Abu Hamdan,” October 156 (Spring 2016), 102, https://doi.org/10.1162/OCTO_a_00253. See also Brian Hochman’s work on the history of wiretapping and eavesdropping, including Brian Hochman, “Eavesdropping in the Age of The Eavesdroppers; or, The Bug in the Martini Olive,” Post45 (February 3, 2016).
  49. Lawrence Abu Hamdan, “Aural Contract: Forensic Listening and the Reorganization of the Speaking-Subject,” in Cesura / Acceso 1 (October 2014), 205, 214-15. See also Brian House, “Machine Listening: Wavenet, Media Materialism, and Rhythmanalysis,” APRJA 6:1 (2017).
  50. Jack Gillum and Jeff Kao, “Aggression Detectors: The Unproven, Invasive Surveillance Technology Schools Are Using to Monitor Students,” ProPublica, June 25, 2019; Sound Intelligence, Aggression Detection; and Louroe Electronics. See also Sean Dockray, “Learning from YouTube,” Rivers of Emotion, Bodies of Ore (No Press / Kunsthall Trondheim, 2018); Haus der Kulturen der Welt, “Hito Steyerl: The Language of Broken Glass,” YouTube, February 26, 2019; and Jorge Roa, Leandro Gallino, Guillermo Jacob, and Patrick K. Hung, “Towards Smart Citizen Security Based on Speech Recognition,” Congreso Argentino de Ciencias de la Informática y Desarrollos de Investigación (2018), https://doi.org/10.1109/CACIDI.2018.8584192. For more on gunshot detection, see Leonardo Cardoso, “Translations and Translation Gaps: The Gunshot Acoustic Surveillance Experiment in Brazil,” Sound Studies 5 (2019), https://doi.org/10.1080/20551940.2018.1564495.
  51. Gillum and Kao, “Aggression Detectors.”
  52. Sarah Barns, “Responsive Listening: Negotiating Cities of Sirens, Smartphones…,” in Milena Droumeva, Randolph Jordan, eds., Sound, Media, Ecology (Palgrave Macmillan, 2019), 227. See also the work of Antonella Radicchi and Australia’s National Acoustic Observatory Project, as presented in Lexy Hamilton-Smith, “Acoustic Observatory Will Record ‘Galaxy of Sounds’ to Help Scientists Monitor Australian Wildlife,” ABC News, November 26, 2019. Thanks to Rowan Wilken for the observatory reference.
  53. I’m grateful to all the folks on Twitter who responded to my request (December 2, 2019) for artists who work in critical AI.
  54. Alison J. Fairbrass, Michael Firman, Carol Williams, Gabriel J. Brostow, Helena Titheridge, and Kate E. Jones, “CityNet – Deep Learning Tools for Urban Ecoacoustic Assessment,” Methods in Ecology and Evolution 10 (2019), 186-97, https://doi.org/10.1111/2041-210X.13114; Alice Eldridge, Michael Casey, Paola Moscoso, Mike Peck, and N. Morales, “Toward the Extraction of Ecologically-Meaningful Soundscape Objects: A New Direction for Soundscape Ecology and Rapid Acoustic Biodiversity Assessment,” International Workshop on Big Data Sciences for Bioacoustic Environmental Survey (2015); and Alice Eldridge and Chris Kiefer, “Toward a Synthetic Acoustic Ecology: Sonically Situated, Evolutionary Agent Based Models of the Acoustic Niche Hypothesis,” in Proceedings of the Artificial Life Conference, Tokyo, Japan, July 23-27, 2018 (MIT Press, 2018). Thanks to Ezra Teboul for the latter reference.
  55. Alison Gow, “Using Google DNI Funding to Help Make Sensory Data a Practical Tool for Journalists,” Behind Local News, July 28, 2018. Thanks to Clare Cook for the reference.
  56. Amanda Petrusich, “Julianna Barwick is Using the New York Sky to Make Music,” The New Yorker, April 22, 2019; and Matt McDermott, “Julianna Barwick Used Generative Music Technology to Make Her New Album, Circumstance Synthesis,” Resident Advisor, December 3, 2019. Thanks to Sarah Hamerman for this reference. See also Henry Cooke’s composition for an Amazon Alexa orchestra: Henry Cooke, “I Am Running in the Cloud,” Github, February 17, 2018; “In Bb 2.0 (for 8 Echoes) – Binaural Soundtrack,” YouTube, January 13, 2018. Thanks to Kim Plowright for the reference.
  57. Jonny Sun, Hannah Davis, and Christopher Sun, “The Laughing Room,” metaLAB(at)Harvard; and Everybody House Games, “Hey Robot.” Thanks to Tega Brain and Luming Hao for these references.
  58. See the work of Ritwik Banerji and Juliana Friend, “Programming Improvisation,” Culanth, August 19, 2019.
Cite
Shannon Mattern, “Urban Auscultation; or, Perceiving the Action of the Heart,” Places Journal, April 2020. https://doi.org/10.22269/200428