
1.13: Sound Design and Equipment


    Sound designers are masters of the aural arts. They are responsible for far more than preshow music and doorbells; they decide what music, effects, and vocal microphones are heard and from where. Most modern sound designers are also composers who create the sounds and write the incidental music required for their productions. When commercial recordings are used in productions that will charge for tickets, the producing organization is required to secure the rights to use that music. The high cost of acquiring these rights, together with digital recording technology that puts powerful tools for creating and augmenting audio files into the hands of average people, has allowed designers to shift from locating appropriate recordings to writing original scores. This, coupled with the fact that sound and music can add so much depth to a production, has made sound design a major component of modern theatre.

    Sound designers’ work falls into several categories. The three main categories are music, reinforcement, and effects. Music may include live musicians as well as recorded music. Music may be a primary element of a scene or may provide an underscore to enhance emotional content. Live musicians require a sound designer to balance the relatively loud sounds of the instruments against the actors and other elements that need to be heard clearly by the audience. In modern musical theatre productions, the orchestra may be in an orchestra pit, another room, or even in another building. Capturing that sound and broadcasting it into an auditorium may become a part of a sound designer’s work. Such work bridges into the category of reinforcement, which also includes the sound from actors who wear a microphone during a performance. While microphones are common to musical productions, even some plays performed in larger auditoriums use microphones to pick up the actors’ voices and broadcast them throughout the theatre space. In this instance, the sound designer strives to have the amplified sound seem as natural as possible to the audience.

    The effects category covers any recorded sounds that are played for an audience, including atmospheric sounds, weather sounds, and practical effects. Most theatres are now equipped with a computer program designed to play sounds for live entertainment. These systems allow the cues, volume levels, and routing of the sound to be entered into a playlist-style cue stack. A sound technician can then press a single “Go” button to operate the entire show in sequence. These programs are often also capable of controlling video playback and, in some cases, can be linked to the lighting control console to trigger cues in sequence. Such a program can allow a single technician to run sound, video, and lights from a single cue stack.
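    The cue-stack idea is simple enough to sketch in a few lines of code. The example below is a hypothetical Python sketch, not the interface of any real playback program; it only illustrates the core idea of a numbered list of cues advanced one at a time by a single “Go” action.

```python
class Cue:
    """One entry in the cue stack: a sound file, a target volume, and a label."""
    def __init__(self, label, sound_file, volume=1.0):
        self.label = label
        self.sound_file = sound_file
        self.volume = volume


class CueStack:
    """A playlist-style stack of cues advanced one at a time by go()."""
    def __init__(self, cues):
        self.cues = cues
        self.position = 0  # index of the next cue to fire

    def go(self):
        """Fire the next cue in sequence, as a single 'Go' button would."""
        if self.position >= len(self.cues):
            print("End of show: no cues remaining.")
            return
        cue = self.cues[self.position]
        print(f"CUE {self.position + 1} ({cue.label}): "
              f"play {cue.sound_file} at volume {cue.volume}")
        self.position += 1


# Example show: each press of "Go" steps through the list in order.
show = CueStack([
    Cue("Preshow music", "preshow.wav", volume=0.6),
    Cue("Doorbell", "doorbell.wav"),
    Cue("Thunder", "thunder.wav", volume=0.9),
])
show.go()   # CUE 1 (Preshow music): play preshow.wav at volume 0.6
show.go()   # CUE 2 (Doorbell): play doorbell.wav at volume 1.0
```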

    Sound designers must also have a deep understanding of their equipment in order to design effectively. All sound systems are composed of the same basic equipment. The sound is routed through this system and processed so it can be delivered to the ears of the audience. The path the sound follows is called the signal path or signal chain. This chain can be thought of as a series of links that must be connected properly for the chain to be functional. The basic components of a sound system are inputs, the pre-amp, the mixer, signal processor(s), amplifiers, and speakers.

    Input or source devices are playback devices such as a computer, cell phone, CD player, tape deck, or microphone. If your input source is anything other than a microphone, it is likely that it is sending out an electronic signal at a signal strength known as line level. Computers, cell phones, and CD players often have a headphone jack. These jacks operate at this line level signal strength. This signal strength is enough to power the tiny speakers in your earbuds, but if you want to fill the room with sound, you will probably need an amplified speaker because the line level signal does not have enough electrical power to physically push a larger speaker to create the sound waves for you to hear.
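    To see why a headphone-level signal cannot fill a room, it helps to compare electrical power. The numbers below are rough, illustrative assumptions (roughly one volt for a line-level signal, an 8-ohm loudspeaker, a modest power amplifier), not measurements from any particular device.

```python
def power_watts(voltage, impedance_ohms):
    """Electrical power delivered to a load: P = V^2 / R."""
    return voltage ** 2 / impedance_ohms

line_level_volts = 1.0    # rough line-level signal voltage (assumed for illustration)
amplifier_volts = 28.0    # rough output of a modest power amplifier (assumed)
speaker_ohms = 8.0        # common loudspeaker impedance

print(power_watts(line_level_volts, speaker_ohms))   # ~0.125 W: fine for earbuds
print(power_watts(amplifier_volts, speaker_ohms))    # ~98 W: enough to move a large speaker
```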

    Microphones operate on an even lower power signal we call mic level. Mic level signals are not even strong enough to power your earbuds because of the way a microphone picks up sounds. A microphone is a device known as a transducer. Its job is to capture a physical sound (a pressure wave) and transduce it into an electronic signal. Most microphones operate by using a diaphragm to absorb the pressure waves of live sound. The diaphragm is connected to an electronic or magnetic field, which is affected by the diaphragm’s movement in response to the pressure waves. The difference in the affected electronic field is recorded as an electronic waveform (the sound signal) and is then processed and recreated by a speaker (another transducing element) back into a physical pressure wave that our ears can hear. Due to the relatively tiny differences in the electronic field of the microphone, the strength of this interpreted signal is low. Microphone signals must be run through a pre-amp to boost them to a level that can withstand any electrical noise they might be subjected to as they run through cables and equipment on their way through the signal chain.
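    The amount of boost a pre-amp provides is usually expressed in decibels, where voltage gain in dB is 20·log10(V_out / V_in). The figures below are typical order-of-magnitude assumptions (a few millivolts straight from a microphone, roughly a volt at line level), not specifications for any particular pre-amp.

```python
import math

def gain_db(v_in, v_out):
    """Voltage gain expressed in decibels: 20 * log10(V_out / V_in)."""
    return 20 * math.log10(v_out / v_in)

mic_level_volts = 0.002    # a few millivolts from a microphone (assumed for illustration)
line_level_volts = 1.2     # roughly +4 dBu professional line level (assumed)

# A pre-amp bridging these two levels needs on the order of 55 dB of gain.
print(round(gain_db(mic_level_volts, line_level_volts), 1))   # ~55.6 dB
```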

    Input devices are connected to a mixer and then to signal processing equipment. The mixer is key both to routing the signal to the correct devices/outputs and to adjusting input signals for that processing. Processing units include graphic equalizers and effect-based units such as echo and reverb processors, which are devices that alter or augment the signal itself. Most mixers have built-in pre-amp functions for mic level signals. Once a signal has been mixed and processed, it is routed to an amplifier to further boost its signal power. The signal must be strong enough when it reaches a speaker not just to preserve itself against electronic noise, but to do the work of moving the speaker back and forth to reproduce the physical pressure waves that our ears can hear. The amplifier boosts the signal to accomplish this work. Speakers are connected directly to the amplifiers and are where the signal chain ends.

    In most theatres this system is already in place, though the sound designer still chooses how it will be operated, the quality of the sound that will be produced, and where the speakers will be located. It is common in modern theatres for speakers to be built into many locations around the auditorium so that sound is directional and can be played at lower volumes, which often seems more realistic. The sound designer, in addition to creating the recorded sound content, also creates a plot of speaker locations and how the cables are to be routed to those speakers. It is common for directors to request the use of sound in the rehearsal room prior to technical rehearsals, so a mini sound system may be required to facilitate that playback.

    It is probably not surprising that there are many varieties of all of these devices available for use. A sound designer must be familiar with many pieces of sound equipment to choose what works best for the needs of a given production.

    There’s More to Know

    Microphones come in a variety of styles, and each has its own specialized use.

    • Dynamic microphone: A durable and inexpensive microphone for voice and instruments.

    • Omnidirectional microphone: Usually used in recording studios, these microphones pick up sounds from all directions.

    • Condenser microphone: A battery-powered microphone, usually used in recording studios, that is quite delicate and sensitive.

    • Shotgun microphone: A microphone capable of picking up sounds from some distance.

    • Wireless hand-held microphone: A hand-held microphone that sends its signal wirelessly to a remote receiver.

    • Wireless lavaliere/body microphone: A small, pin-on microphone that sends its signal wirelessly to a remote receiver, sometimes taped onto the skin or woven into the hair of a performer.

    • PZM/PCC microphone: Boundary microphones often placed at the edge of a stage deck to pick up reflected sounds.

    In addition to capturing sounds with microphones, sound designers must also provide monitor feeds of sound to orchestras and singers so they can keep in tune and in time with one another. It may be that an individual performer does not need or want to hear the entire soundscape, but just a part of it. A singer may only need to hear the piano to sing their part, and not the drums. In musical theatre, the same performers who wear a wireless body microphone may also wear a wireless in-ear monitor. These devices and their microphone counterparts work essentially like a mini radio station, broadcasting from one end and receiving the signal at the other. Because a production may have many of these wireless devices working at one time, they must all broadcast on separate frequencies to avoid overlap. The frequencies of these devices fall in the same range used by walkie-talkies, police radios, and other public broadcasters. It is imperative that all frequencies are checked for cross-traffic at each venue.
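    Checking that wireless units are on separate frequencies amounts to confirming that no two assigned frequencies sit closer together than some minimum spacing. The sketch below is a hypothetical illustration; the frequencies and the 0.3 MHz spacing are made-up example values, not a real coordination rule.

```python
def find_conflicts(assignments, min_spacing_mhz=0.3):
    """Return pairs of wireless units whose frequencies are too close together."""
    conflicts = []
    units = sorted(assignments.items(), key=lambda item: item[1])
    for (name_a, freq_a), (name_b, freq_b) in zip(units, units[1:]):
        if freq_b - freq_a < min_spacing_mhz:
            conflicts.append((name_a, name_b))
    return conflicts

# Hypothetical example: body mics and in-ear monitors sharing one venue.
assignments = {
    "Body mic 1": 518.2,
    "Body mic 2": 518.4,      # only 0.2 MHz from body mic 1: flagged
    "In-ear monitor 1": 524.0,
}
print(find_conflicts(assignments))   # [('Body mic 1', 'Body mic 2')]
```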

    There’s More to Know

    Stereo vs. monaural (mono) signals: In a monophonic, or “monaural,” system a single channel carries the audio signal. Thus, each speaker in the system receives the same signal information. This can help provide clarity in large systems with many speakers. Stereo, or stereophonic, systems send two independent audio channels played through two speakers (or sets of speakers). This allows the system to reproduce an image of sound in the room by manipulating the specific level and phase of each signal.
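    One way a stereo system “places” a sound between the two speakers is by adjusting the relative level sent to each channel. The sketch below uses the common constant-power pan law as an illustration; the exact law a given mixer uses varies, so treat this formula as one typical choice rather than a universal rule.

```python
import math

def constant_power_pan(position):
    """Split a mono signal across left/right channels.

    position runs from 0.0 (hard left) to 1.0 (hard right);
    0.5 keeps the image in the center.
    """
    angle = position * math.pi / 2
    left_gain = math.cos(angle)
    right_gain = math.sin(angle)
    return left_gain, right_gain

for pos in (0.0, 0.5, 1.0):
    left, right = constant_power_pan(pos)
    print(f"position {pos}: left {left:.2f}, right {right:.2f}")
# position 0.0: left 1.00, right 0.00   (image at the left speaker)
# position 0.5: left 0.71, right 0.71   (image centered)
# position 1.0: left 0.00, right 1.00   (image at the right speaker)
```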

    Every theatre space has its own natural sound. Some have an obvious echo, or a slap-back effect that doubles voices. Some are warm sounding and perfect for the unamplified voice. Others are better suited to music and tend to muddy voices. Many of the acoustical properties of any room are created by the room’s architecture and the soft and hard surfaces the sound encounters. The pressure waves that create sound bounce easily off hard surfaces and are absorbed by softer surfaces. It is common for sound levels to be set for a production in an empty auditorium during technical rehearsals and then found to be too soft when the theatre fills up with bodies that not only absorb sound, but also make quite a bit of noise themselves, even just by breathing and shifting in their seats. Sound designers must listen to each room they work in to overcome its particular acoustical difficulties.
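    The effect of absorptive surfaces (and bodies) on a room can be estimated with Sabine’s classic reverberation formula, RT60 ≈ 0.161·V / A, where V is the room volume in cubic meters and A is the total absorption (surface area times absorption coefficient). The room size and absorption values below are invented purely for illustration.

```python
def rt60_sabine(volume_m3, absorption_sabins):
    """Sabine's estimate of reverberation time (seconds)."""
    return 0.161 * volume_m3 / absorption_sabins

volume = 3000.0               # hypothetical auditorium volume in cubic meters
empty_absorption = 300.0      # absorption of walls, floor, and empty seats (assumed)
audience_absorption = 250.0   # extra absorption added by a full house (assumed)

print(round(rt60_sabine(volume, empty_absorption), 2))                        # ~1.61 s empty
print(round(rt60_sabine(volume, empty_absorption + audience_absorption), 2))  # ~0.88 s full
```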

    No matter what equipment makes up the sound system, a series of devices needs to be interconnected to complete the signal path. A series of sound-specific cables connects the devices. Sound cable falls into two major categories: balanced and unbalanced. As we have learned, we must use a signal stronger than mic level to move the signal through the system and keep electronic noise from affecting its quality. All of our connector cables run in, around, and through powered devices. Any leaked electrical power can muddy a sound signal, so using shielded cables in good repair and with tight, solid connections helps keep the signal clear. A signal-to-noise ratio is often mentioned when talking about sound routing: you want the strongest signal with the least noise possible. When it comes to connecting cables, a balanced cable will do a better job of keeping out electronic noise than an unbalanced cable. The difference between the two is easy to detect. A balanced cable has three wires to transport the signal, while an unbalanced cable has only two. In an unbalanced cable, one wire carries part of the electronic signal and also doubles as the shielding for the core wire. The shielding wire’s job is to channel any electronic noise away from the wire carrying the sound signal. A balanced cable sends the signal out on two wires, one of which carries a copy of the signal with its phase inverted. Because the two copies run parallel to one another, any noise that leaks in affects both equally; when the inverted copy is flipped back at the receiving end, the noise cancels out, resulting in less interference. The third wire is then free to shield the signal wires and ground out electronic interference. It makes sense to always use a balanced cable when you can to help protect your signal.
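    The noise-cancelling trick of a balanced line can be demonstrated numerically: send the signal on one wire, an inverted copy on the other, let the same noise land on both, then subtract at the receiving end. The NumPy sketch below is a simplified illustration of that idea, not a model of any particular cable or receiver circuit.

```python
import numpy as np

rng = np.random.default_rng(0)

# A simple "audio" signal and some interference picked up along the cable run.
t = np.linspace(0, 1, 1000)
signal = np.sin(2 * np.pi * 5 * t)           # the sound signal we want to keep
noise = 0.3 * rng.standard_normal(t.size)    # hum/interference leaking into the cable

# Balanced line: the signal goes out normal on the "hot" wire and
# phase-inverted on the "cold" wire; the noise is induced equally on both.
hot = signal + noise
cold = -signal + noise

# The receiving end subtracts the two wires: the noise cancels,
# and the original signal is recovered.
received = (hot - cold) / 2
print(np.allclose(received, signal))   # True: the interference is gone
```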


    Common connectors: Balanced = XLR, and ¼″ if TRS (tip, ring, sleeve). Unbalanced = ¼″ if TS (tip, sleeve), RCA, and the common 3.5 mm “mini jack.”

    The speaker wires that connect speakers to amplifiers are not balanced cables, because the high strength of the signal they carry makes that protection unnecessary, but they can still be affected by ambient electrical noise. Care should be taken to ensure that these cables are never routed alongside power extension cords or lighting cables, which can result in a buzz in the speakers. If the cables must cross, make sure they cross each other at a 90-degree angle and do not run along each other’s length.

    For Further Exploration

    Gillette, J. Michael. 2007. Theatrical Design and Production: An Introduction to Scene Design and Construction, Lighting, Sound, Costume, and Makeup. Boston: McGraw-Hill.

    Rossing, Thomas D., F. Richard Moore, and Paul A. Wheeler. 2001. The Science of Sound, 3rd Edition. San Francisco: Pearson.

    “Association of Sound Designers.” n.d. Accessed August 16, 2018. http://associationofsounddesigners.com/.


    This page titled 1.13: Sound Design and Equipment is shared under a CC BY-NC 4.0 license and was authored, remixed, and/or curated by Tal Sanders (Tualatin Books, imprint of Pacific University Press).
