Archive for September, 2008

Sound Effect

In film
In the context of motion pictures and television, the term sound effects refers to an entire hierarchy of sound elements whose production encompasses many different disciplines, including:
Hard sound effects are common sounds that appear on screen, such as door slams, weapons firing, and cars driving by.
Background (or BG) sound effects are sounds that do not explicitly synchronize with the picture, but indicate setting to the audience, such as forest sounds, the buzzing of fluorescent lights, and car interiors. The sound of people talking in the background is also considered a “BG,” but only if the speaker is unintelligible and the language is unrecognizable (this is known as walla). These background noises are also called ambience or atmos (“atmosphere”).
Foley sound effects are sounds that synchronize with action on screen and require the expertise of a foley artist to record properly. Footsteps, the movement of hand props (e.g., a tea cup and saucer), and the rustling of cloth are common foley units.
Design sound effects are sounds that do not normally occur in nature, or are impossible to record in nature. These sounds are used to suggest futuristic technology in a science fiction film, or are used in a musical fashion to create an emotional mood.
Each of these sound effect categories is specialized, and sound editors are often known as specialists in one area (e.g., a “car cutter” or “guns cutter”).
The process can be separated into two steps: the recording of the effects, and the processing. Large libraries of commercial sound effects are available to content producers (such as the famous Wilhelm scream), but on large projects sound effects may be custom-recorded for the purpose.
Although effects libraries may contain every effect a producer requires, the effects are seldom in the correct sequence and never in the required time frame. In the early days of film and radio, library effects were held on analogue discs, and an expert technician could play six effects on six turntables within five seconds. Today, with effects held in digital format, it is easy to arrange any required sequence on any desired timeline.
Foley processing can also make the smallest sound seem convincing on screen, and the audience rarely guesses how much work went into creating that specific sound.
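As a rough sketch of that kind of digital sequencing (a simplification, assuming NumPy, mono floating-point clips at a shared sample rate, and hypothetical clip names), each library effect is simply mixed into an output buffer at its cue time:

```python
import numpy as np

SAMPLE_RATE = 48000  # Hz; all clips assumed pre-converted to this rate

def sequence(cues, duration_s):
    """Mix (start_time_s, clip) pairs into a single timeline buffer."""
    out = np.zeros(int(duration_s * SAMPLE_RATE))
    for start_s, clip in cues:
        i = int(start_s * SAMPLE_RATE)
        n = min(len(clip), len(out) - i)
        out[i:i + n] += clip[:n]      # overlapping effects simply sum
    return np.clip(out, -1.0, 1.0)    # guard against overload

# Hypothetical library clips; one second of noise stands in for real files
door_slam = np.random.uniform(-0.5, 0.5, SAMPLE_RATE)
glass_break = np.random.uniform(-0.5, 0.5, SAMPLE_RATE)

timeline = sequence([(0.25, door_slam), (1.10, glass_break)], duration_s=3.0)
```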

Video Games
The principles involved with modern video game sound effects (since the introduction of sample playback) are essentially the same as those of motion pictures. Typically a game project requires two jobs to be completed: sounds must be recorded or selected from a library and a sound engine must be programmed so that those sounds can be incorporated into the game’s interactive environment.

In earlier computers and video game systems, sound effects were typically produced using sound synthesis. In modern systems, increases in storage capacity and playback quality have allowed sampled sound to be used instead. Modern systems also frequently utilize positional audio, often with hardware acceleration, and real-time audio post-processing, which can be tied to the 3D graphics pipeline. Based on the internal state of the game, various calculations can be made in real time, allowing, for example, realistic sound dampening, echoes, and Doppler effects.
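As a minimal sketch of the per-frame calculation such a sound engine might make, the function below computes a distance-based gain and a Doppler pitch ratio from the game state; the attenuation and Doppler formulas are standard physics, while the function itself and its parameters are invented for illustration:

```python
import math

SPEED_OF_SOUND = 343.0  # m/s in air

def positional_params(src_pos, src_vel, listener_pos, listener_vel):
    """Per-frame gain and Doppler ratio for one 3D sound source."""
    dx = [s - l for s, l in zip(src_pos, listener_pos)]
    dist = max(math.sqrt(sum(d * d for d in dx)), 1e-6)
    gain = min(1.0, 1.0 / dist)  # inverse-distance (free-field) attenuation
    # Velocity components along the listener-to-source line
    unit = [d / dist for d in dx]
    v_src = sum(v * u for v, u in zip(src_vel, unit))       # + = receding
    v_lis = sum(v * u for v, u in zip(listener_vel, unit))  # + = approaching
    doppler = (SPEED_OF_SOUND + v_lis) / (SPEED_OF_SOUND + v_src)
    return gain, doppler  # ratio > 1 means the pitch shifts up

# A car 20 m away, driving toward a stationary listener at 30 m/s
gain, doppler = positional_params((20, 0, 0), (-30, 0, 0), (0, 0, 0), (0, 0, 0))
```

In a real engine the gain would also feed panning and reverb sends, and the Doppler ratio would drive a resampler on the source’s voice.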

Historically, the simplicity of game environments reduced the number of sounds needed, so only one or two people were directly responsible for sound recording and design. As the video game business has grown and computer sound reproduction quality has increased, however, the teams of sound designers dedicated to game projects have likewise grown, and the demands placed on them may now approach those of mid-budget motion pictures.
Recording
The most realistic sound effects originate from original sources; the closest sound to machine-gun fire that we can replay is an original recording of actual machine guns. Less realistic sound effects are digitally synthesized or sampled and sequenced (the same recording played repeatedly using a sequencer). When the producer or content creator demands high-fidelity sound effects, the sound editor usually must augment his available library with new sound effects recorded in the field.
When the required sound effect is of a small subject, such as scissors cutting, cloth ripping, or footsteps, the sound effect is best recorded in a studio, under controlled conditions. Such small sounds are often delegated to a foley artist and foley editor. Many sound effects cannot be recorded in a studio, such as explosions, gunfire, and automobile or aircraft maneuvers. These effects must be recorded by a sound effects editor or a professional sound effects recordist.
When such “big” sounds are required, the recordist will begin contacting professionals or technicians in the same way a producer may arrange a crew; if the recordist needs an explosion, he may contact a demolition company to see if any buildings are scheduled to be destroyed with explosives in the near future. If the recordist requires a volley of cannon fire, he may contact historical re-enactors or gun enthusiasts. People are often excited to participate in something that will be used in a motion picture, and love to help.
Depending on the effect, recordists may use several DAT, hard disk, or Nagra recorders and a large number of microphones. During a cannon- and musket-fire recording session for the 2003 film The Alamo, conducted by Jon Johnson and Charles Maynes, two to three DAT machines were used. One machine was stationed near the cannon itself, so it could record the actual firing. Another was stationed several hundred yards away, below the trajectory of the ball, to record the sound of the cannonball passing by. When the crew recorded musket-fire, a set of microphones were arrayed close to the target (in this case a swine carcass) to record the musket-ball impacts.
A contrasting example is the common technique for recording an automobile. For recording “onboard” car sounds (which include the car interior), a three-microphone technique is common. Two microphones record the engine directly: one is taped to the underside of the hood, near the engine block. The second microphone is covered in a wind screen and tightly attached to the rear bumper, within an inch or so of the tail pipe. The third microphone, often a stereo microphone, is stationed inside the car to capture the interior.
Having all of these tracks at once gives a sound designer or mixer a great deal of control over how the car will sound. To make the car more ominous or low, he can mix in more of the tailpipe recording; to make the car sound like it is running at full throttle, he can mix in more of the engine recording and reduce the interior perspective. In cartoons, a pencil dragged down a washboard may be used to simulate the sound of a sputtering engine.

What we would consider today to be the first recorded sound effect was of Big Ben striking 10:30, 10:45, and 11:00. It was recorded on a brown wax cylinder by technicians at Edison House in London on July 16, 1890. This recording is currently in the public domain.
Processing Effects

As the car example demonstrates, the ability to make multiple simultaneous recordings of the same subject—through the use of several DAT or multitrack recorders—has made sound recording into a sophisticated craft. The sound effect can be shaped by the sound editor or sound designer, not just for realism, but for emotional effect.
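A toy version of that control, assuming NumPy and three equal-length mono stems recorded as in the onboard-car setup above (noise stands in for the real recordings):

```python
import numpy as np

n = 48000  # one second at 48 kHz
engine, exhaust, interior = (np.random.uniform(-0.3, 0.3, n) for _ in range(3))

def mix_car(engine, exhaust, interior,
            g_engine=1.0, g_exhaust=1.0, g_interior=1.0):
    """Blend the three onboard recordings with independent gains."""
    mix = g_engine * engine + g_exhaust * exhaust + g_interior * interior
    return np.clip(mix, -1.0, 1.0)

# More tailpipe for an ominous, low car; more engine and less
# interior perspective for a full-throttle read
ominous = mix_car(engine, exhaust, interior, g_exhaust=1.5, g_interior=0.7)
full_throttle = mix_car(engine, exhaust, interior, g_engine=1.4, g_interior=0.5)
```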
Once the sound effects are recorded or captured, they are usually loaded into a computer integrated with an audio non-linear editing system. This allows a sound editor or sound designer to heavily manipulate a sound to meet his or her needs.
The most common sound design tool is the use of layering to create a new, interesting sound out of two or three old, average sounds. For example, the sound of a bullet impact into a pig carcass may be mixed with the sound of a melon being gouged to add to the “stickiness” or “gore” of the effect. If the effect is featured in a close-up, the designer may also add an “impact sweetener” from his or her library. The sweetener may simply be the sound of a hammer pounding hardwood, equalized so that only the low-end can be heard. The low end gives the three sounds together added weight, so that the audience actually “feels” the weight of the bullet hit the victim.
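A minimal sketch of that layering recipe, assuming NumPy and SciPy and three equal-length mono recordings (noise stands in for the real material); a low-pass filter plays the role of the “low end only” equalization applied to the sweetener:

```python
import numpy as np
from scipy.signal import butter, lfilter

SAMPLE_RATE = 48000
n = SAMPLE_RATE  # one second of placeholder audio per element
bullet_impact, melon_gouge, hammer_on_wood = (
    np.random.uniform(-0.4, 0.4, n) for _ in range(3))

# "Impact sweetener": keep only the low end of the hammer hit (~150 Hz down)
b, a = butter(4, 150 / (SAMPLE_RATE / 2), btype="low")
sweetener = lfilter(b, a, hammer_on_wood)

# Layer the elements; the low-passed hammer adds weight to the hit
layered = np.clip(bullet_impact + 0.8 * melon_gouge + 1.2 * sweetener, -1.0, 1.0)
```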
If the victim is the villain, and his death is climactic, the sound designer may add reverb to the impact, in order to enhance the dramatic beat. And then, as the victim falls over in slow motion, the sound editor may add the sound of a broom whooshing by a microphone, pitch-shifted down and time-expanded to further emphasize the death. If the film is science-fiction, the designer may run the “whoosh” through a phaser to give it a more sci-fi feel. (For a list of many of the sound effects processes available to a sound designer, see the Techniques section below.)
Aesthetics

When creating sound effects for films, sound recordists and editors do not generally concern themselves with the verisimilitude or accuracy of the sounds they present. The sound of a bullet entering a person from a close distance may sound nothing like the sound designed in the above example, but since very few people are aware of how such a thing actually sounds, the job of designing the effect is mainly an issue of creating a conjectural sound which feeds the audience’s expectations while still suspending disbelief.
In the previous example, the phased ‘whoosh’ of the victim’s fall has no analogue in real-life experience, but it is emotionally immediate. If a sound editor uses such sounds in the context of an emotional climax or a character’s subjective experience, they can add to the drama of a situation in a way visuals simply cannot. If a visual effects artist were to attempt something similar to the ‘whooshing fall’ example, it would probably look ridiculous or at least excessively melodramatic.
The “Conjectural Sound” principle applies even to happenstance sounds, such as tires squealing, doorknobs turning or people walking. If the sound editor wants to communicate that a driver is in a hurry to leave, he will cut the sound of tires squealing when the car accelerates from a stop; even if the car is on a dirt road, the effect will work if the audience is dramatically engaged. If a character is afraid of someone on the other side of a door, the turning of the doorknob can take a second or more, and the mechanism of the knob can possess dozens of clicking parts. A skillful Foley artist can make someone walking calmly across the screen seem terrified simply by giving the actor a different gait.
Techniques
In music and film/television production, typical effects used in recording and amplified performances are listed below (a minimal code sketch of several of them follows the list):
echo – to simulate the effect of reverberation in a large hall or cavern, one or several delayed signals are added to the original signal. To be perceived as echo, the delay has to be on the order of 50 milliseconds or more. Short of actually playing a sound in the desired environment, the effect of echo can be implemented using either digital or analog methods. Analog echo effects are implemented using tape delays and/or spring reverbs. When many delayed signals are mixed over several seconds, the resulting sound has the effect of being presented in a large room, and it is more commonly called reverberation, or reverb for short.
flanger – to create an unusual sound, a delayed signal is added to the original signal with a continuously variable delay (usually smaller than 10 ms). This effect is now done electronically using DSP, but originally the effect was created by playing the same recording on two synchronized tape players, and then mixing the signals together. As long as the machines were synchronized, the mix would sound more or less normal, but if the operator placed his finger on the flange of one of the players (hence “flanger”), that machine would slow down and its signal would fall out of phase with its partner, producing a phasing effect. Once the operator took his finger off, the player would speed up until its tachometer was back in phase with the master, and as this happened, the phasing effect would appear to slide up the frequency spectrum. This phasing up and down the register can be performed rhythmically.
phaser – another way of creating an unusual sound; the signal is split, a portion is filtered with an all-pass filter to produce a phase-shift, and then the unfiltered and filtered signals are mixed. The phaser effect was originally a simpler implementation of the flanger effect since delays were difficult to implement with analog equipment. Phasers are often used to give a “synthesized” or electronic effect to natural sounds, such as human speech. The voice of C-3PO from Star Wars was created by taking the actor’s voice and treating it with a phaser.
chorus – a delayed signal is added to the original signal with a constant delay. The delay has to be short in order not to be perceived as echo, but above 5 ms to be audible. If the delay is too short, it will destructively interfere with the un-delayed signal and create a flanging effect. Often, the delayed signals will be slightly pitch shifted to more realistically convey the effect of multiple voices.
equalization – different frequency bands are attenuated or boosted to produce desired spectral characteristics. Moderate use of equalization (often abbreviated as “EQ”) can be used to “fine-tune” the tone quality of a recording; extreme use of equalization, such as heavily cutting a certain frequency, can create more unusual effects.
filtering – Equalization is a form of filtering. In the general sense, frequency ranges can be emphasized or attenuated using low-pass, high-pass, band-pass or band-stop filters. Band-pass filtering of voice can simulate the effect of a telephone because telephones use band-pass filters.
overdrive effects such as the use of a fuzz box can be used to produce distorted sounds, such as for imitating robotic voices or to simulate distorted radiotelephone traffic (e.g., the radio chatter between starfighter pilots in the science fiction film Star Wars). The most basic overdrive effect involves clipping the signal when its absolute value exceeds a certain threshold.
pitch shift – similar to pitch correction, this effect shifts a signal up or down in pitch. For example, a signal may be shifted an octave up or down. This is usually applied to the entire signal, and not to each note separately. One application of pitch shifting is pitch correction. Here a musical signal is tuned to the correct pitch using digital signal processing techniques. This effect is ubiquitous in karaoke machines and is often used to assist pop singers who sing out of tune. It is also used intentionally for aesthetic effect in such pop songs as Cher’s Believe and Madonna’s Die Another Day.
time stretching – the opposite of pitch shift, that is, the process of changing the speed of an audio signal without affecting its pitch.
resonators – emphasize harmonic frequency content on specified frequencies.
robotic voice effects are used to make an actor’s voice sound like a synthesized human voice.
synthesizer – generates almost any sound artificially, either by imitating natural sounds or by creating completely new ones.
modulation – to change the frequency or amplitude of a carrier signal in relation to a predefined signal. Ring modulation, a form of amplitude modulation, is an effect made famous by the Daleks of Doctor Who and commonly used throughout sci-fi.
compression – the reduction of the dynamic range of a sound to avoid unintentional fluctuation in the dynamics. Level compression is not to be confused with audio data compression, where the amount of data is reduced without affecting the amplitude of the sound it represents.
3D audio effects – place sounds outside the stereo field
reverse echo – a swelling effect created by reversing an audio signal and recording echo and/or delay while the signal runs in reverse. When played back forward, the last echoes are heard before the processed sound, creating a rush-like swell preceding and during playback.
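Several of the entries above already describe, in words, simple buffer operations. The sketch below makes four of them concrete (echo, flanger, overdrive clipping, and the telephone-style band-pass), assuming NumPy and SciPy, mono floating-point audio at 48 kHz, and noise as a stand-in for a real recording:

```python
import numpy as np
from scipy.signal import butter, lfilter

SR = 48000  # sample rate in Hz

def echo(x, delay_s=0.25, decay=0.5):
    """Add one delayed, attenuated copy (delays >= ~50 ms read as echo)."""
    d = int(delay_s * SR)
    out = np.copy(x)
    out[d:] += decay * x[:-d]
    return out

def flanger(x, max_delay_s=0.005, rate_hz=0.5):
    """Mix in a copy whose short delay (< 10 ms) sweeps under an LFO."""
    n = np.arange(len(x))
    delay = (max_delay_s * SR / 2) * (1 + np.sin(2 * np.pi * rate_hz * n / SR))
    idx = np.clip(n - delay.astype(int), 0, len(x) - 1)
    return 0.5 * (x + x[idx])

def overdrive(x, threshold=0.3):
    """Hard-clip the signal wherever its absolute value exceeds the threshold."""
    return np.clip(x, -threshold, threshold) / threshold

def telephone(x, low_hz=300.0, high_hz=3400.0):
    """Band-pass filter roughly matching a telephone channel."""
    b, a = butter(4, [low_hz / (SR / 2), high_hz / (SR / 2)], btype="band")
    return lfilter(b, a, x)

voice = np.random.uniform(-0.5, 0.5, SR)  # placeholder for a real recording
processed = telephone(overdrive(flanger(echo(voice))))
```

A real plug-in would interpolate fractional delays and process audio in streaming blocks; the point here is only that each effect reduces to a short buffer manipulation.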

Sound effects or audio effects are artificially created or enhanced sounds, or sound processes used to emphasize artistic or other content of films, television shows, live performance, animation, video games, music, or other media.
In motion picture and television production, a sound effect is a sound recorded and presented to make a specific storytelling or creative point without the use of dialogue or music. The term often refers to a process applied to a recording, without necessarily referring to the recording itself. In professional motion picture and television production, dialogue, music, and sound effects recordings are treated as separate elements. Dialogue and music recordings are never referred to as sound effects, even though the processes applied to them, such as reverberation or flanging, are often called “sound effects”.


Production Sound Mixer

A production sound mixer or location sound recordist is the member of a film crew responsible for recording all sound and sound effects on set during the photography of a motion picture, for later inclusion in the finished product, or for reference to be used by the sound designer, sound effects editors, or foley artists. This requires choice and deployment of microphones, choice of recording media, and mixing of audio signals in real time.

Usually, the recordist will arrive on location with his or her own equipment, which normally includes microphones, radio systems, booms, a mixing desk, audio storage, headphones, cables, tools, and a small amount of stationery for making notes and logs. The recordist may be asked to capture a wide variety of sound on location and must also consider the format of the finished product (mono, stereo, or surround sound). The recorded production sound track is later combined with other elements or replaced through automatic dialogue replacement (ADR).

Often when filming on video, the sound recordist may record audio directly onto the camera rather than onto a separate medium. Even so, a separate recording is often made as well, since it provides a backup, may offer more tracks, and can include sound captured away from the camera.

The sound mixer is considered a department head and is thus completely responsible for all aspects of production sound, including hiring the boom operator and utility sound technician, planning the technical setup for sound (both the sound equipment and the ancillary devices involved in syncing and time offsets), anticipating and discussing sound-related problems with the rest of the crew, and ordering and preparing the sound equipment to be used on the set.


Audio Engineering

Audio engineering is a part of audio science dealing with the recording and reproduction of sound through mechanical and electronic means. The field draws on many disciplines, including electrical engineering, acoustics, psychoacoustics, and music. Unlike acoustical engineering, audio engineering generally does not deal with noise control or acoustical design; an audio engineer is often closer to the creative and technical aspects of audio than to formal engineering. An audio engineer must be proficient with different types of recording media, such as analog tape and digital multitrack recorders and workstations, and must be computer-literate. With the advent of the digital age, it has become more and more important for the audio engineer to understand software and hardware integration, from synchronization to analog-to-digital transfers.

Lexical dispute

The expressions “audio engineer” and “sound engineer” are ambiguous. Such terms can refer to a person working in sound and music production, as well as to an engineer with a degree who designs professional equipment for these tasks. The latter professional often develops the tools needed for the former’s work. Other languages, such as German and Italian, have different words to refer to these two activities. For instance, in German, Tontechniker (audio technician) is the one who operates the audio equipment, and Toningenieur (audio engineer) is the one who designs, builds and repairs it.

Individuals who design acoustical simulations of rooms, shape algorithms for digital signal processing, work on computer music problems, perform institutional research on sound, or practice in other advanced fields of audio engineering are most often graduates of an accredited college or university, or have passed a difficult civil qualification test.
Practitioners
[Image caption: An engineer at one of the audio consoles of the Danish broadcasting corporation, Danmarks Radio. The console is an NP-elektroakustik, specially made for Danmarks Radio in the 1980s.]

An audio engineer is someone with experience and training in the production and manipulation of sound through mechanical (analog) or digital means. As a professional title, this person is sometimes designated as a sound engineer or recording engineer instead. A person with one of these titles is commonly listed in the credits of many commercial music recordings (as well as in other productions that include sound, such as movies).

Audio engineers are generally familiar with the design, installation, and/or operation of sound recording, sound reinforcement, or sound broadcasting equipment, including large and small format consoles. In the recording studio environment, the audio engineer records, edits, manipulates, mixes, and/or masters sound by technical means in order to realize an artist’s or record producer’s creative vision. While usually associated with music production, an audio engineer deals with sound for a wide range of applications, including post-production for video and film, live sound reinforcement, advertising, multimedia, and broadcasting. When referring to video games, an audio engineer may also be a computer programmer.

In larger productions, an audio engineer is responsible for the technical aspects of a sound recording or other audio production, and works together with a record producer or director, although the engineer’s role may also be integrated with that of the producer. In smaller productions and studios, the sound engineer and the producer are often the same person.

In typical sound reinforcement applications, audio engineers often assume the role of producer, making artistic decisions along with technical ones.

Different professional branches
Studio engineer is either a sound engineer working in a studio together with a producer, or a sound engineer who also acts as producer.
Recording engineer is a person who records sound.
Mixing engineer is a person who creates mixes of already recorded materials. It is not uncommon for a commercial record to be recorded at one studio and later mixed by different engineers in other studios.
Mastering engineer is a person who uses the mix to create the master that is replicated for distribution.
Game audio designer is a person who deals with the sound aspects of game development.
Live sound engineer is a person dealing with live sound reinforcement. This usually includes planning and installing the speakers and other equipment, and mixing sound during the show. This may or may not include running the foldback sound.
Foldback or monitor engineer is a person running foldback sound during a live event. The term “foldback” is outdated and refers to the practice of folding back audio signals from the FOH (Front of House) mixing console to the stage in order for musicians to hear themselves while performing. Monitor engineers usually have a separate audio system from the FOH engineer and manipulate audio signals independently from what the audience hears, in order to satisfy the requirements of each performer on stage. In-ear systems, digital and analog mixing consoles, and a variety of speaker enclosures are typically used by monitor engineers. In addition, most monitor engineers must be familiar with wireless or RF (radio-frequency) equipment and must interface personally with the artist(s) during each performance.
Systems engineer is a person responsible for the design, setup, and flying of modern PA systems, which are often very complex. A systems engineer is usually also referred to as a “crew chief” on tour and is responsible for the performance and day-to-day job requirements of the audio crew as a whole, along with the FOH audio system.
Audio post engineer is a person who edits and mixes audio for film and television.

Education

Audio engineers come from backgrounds as varied as electrical engineering and fine arts; many colleges and accredited institutions around the world offer degrees in audio engineering, such as a BS in Audio Production. A great number of production mixers are autodidacts with no formal training.

Equipment

Audio engineers in their daily work operate and make use of:
Mixing consoles
Microphones
Signal processors
Tape machines (mainly Multitrack recording tape machines)
Digital audio workstations
Music sequencers
Speakers
Preamplifiers
Amplifiers
Electronic tuners


Sound Design

Sound design is both a technical and a creative field. It covers all non-compositional elements of a film, a play, a music performance or recording, computer game software, or any other multimedia project. A person who practices the art of sound design is known as a sound designer.

The Academy of Motion Picture Arts and Sciences recognizes the finest or most aesthetic sound mixing or recording in film with the Academy Award for Best Sound,[1] historically given to an English-language film. The new Tony Award for Best Sound Design is to be awarded for the best sound design in American theatre.[2]

Sound Design can also be defined as: “The manipulation of audio elements to achieve a desired effect.”


Director of Audiography

The Director of Audiography (DOA), also called Sound Director (SD) or Audio Director (AuD), is the designer and manager responsible for the audio experience in a film. The responsibilities range from the sound concept, design, planning, and initial budgeting in pre-production, through recording and scheduling in production, to coordinating the final mix in post-production, along with overall quality control of the audio process in filmmaking.

The DOA is mostly found in Bollywood productions, where music is a vital part of the genre. The SD was once a recognised role in Hollywood prior to the 1990s; today, however, this role is largely split between the sound designer and sound engineer (in post-production) and the sound mixer (in production). Hollywood films are normally dialogue-based, and even the dialogue is often re-recorded in post-production using a technique called ADR (automated dialogue replacement).

A tension exists between the visual and aural dimensions of filmmaking (see Coffey), which is reflected in film history, where silent films preceded the “talkies”. Production sound crews often complain about the lack of consideration given to audio issues in some productions. Having a DOA or SD helps alleviate such pressures by providing a powerful presence to defend the dimension of sound in filmmaking. The absence of a DOA or SD can result in a production company failing to plan effectively or budget realistically for sound.

Hollywood sound editor David Yewdall bemoans the loss of the SD and tells the true story of how the film producer of Airport failed to understand the importance of recording aircraft sound effects during a shoot, costing the film additional expense in post-production. Every dimension of filmmaking requires specialist attention; none less than sound, which requires the detailed planning and coordination of an experienced DOA or SD to assure the sound quality of any modern film.

This role should not be confused with that of Recording Director, who was the head of sound recording at a major Hollywood studio before the 1960s. Douglas Shearer was the Recording Director of MGM until 1952; usually this was the only sound role credited on those early MGM films.
