
How do you handle your post production sound?

I'm curious and wanted to do a quick survey:

What programs do you use to do your sound post production?

How do you marry it up with picture later on?

Do you ever mix your films in 5.1?

What's the most money you would be willing (and feasibly able) to spend on a 2 hour film for professional post audio?
 
I edit footage and sound in Premiere.
I have at least two tracks of audio.

I export a small version of the video with sound.
I export each dialogue track as its own file; each file is as long as the project, so most of it might be empty. This makes lining up in the DAW a snap.
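A toy sketch of why those full-length exports line up "for free": every file starts at project time zero, so the DAW needs no offset information. Audio here is just lists of float samples, and the function name and numbers are invented for illustration.

```python
def export_full_length(clip, clip_start, project_len):
    """Pad a clip with silence so the file spans the whole project.

    clip        -- list of samples for one dialogue clip
    clip_start  -- sample index where the clip begins on the timeline
    project_len -- total project length in samples
    """
    stem = [0.0] * project_len                     # silence for the full timeline
    stem[clip_start:clip_start + len(clip)] = clip  # clip lands at its timeline spot
    return stem

# A 4-sample clip starting at sample 3 of a 10-sample project:
stem = export_full_length([0.5, 0.5, 0.5, 0.5], 3, 10)
print(stem)  # silence, then the clip, then silence -- drop it at 0:00 and it's in sync
```

Most of each file really is empty, which is the trade-off: bigger exports in exchange for zero manual alignment.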

Import all these into my DAW (Cubase on a different PC), or sometimes CS5 Soundbooth if I'm feeling lazy.
I edit the audio in that program, add music, etc., and export a final master.
I bring it all back together in After Effects, where I'm doing the final CC and master render anyway.

Not perfect and prone to time suckage... but I understand it.

I have no need for professional services of this nature, so I can't guess how much I would pay for such...
 
FCP for editing video.
Nuendo for audio. I did a 5.1 mix for my film. I broke the audio up into many scenes, because I typically used 3 or 4 reverb types for each room, different tracks for sound effects, etc. If I had just one project file for the entire movie it would probably be 1,000 tracks, it'd be ridiculously confusing, and Nuendo would just crash. So I'd create perhaps 20 final mixes (one per scene) in stereo and surround sound, then drop them in and sync them to the big long audio timeline, then export 1 stereo and 1 surround track to import back into FCP.
If I had hired someone to do the sound it would have cost a fortune and never would have sounded as perfect as I got it. These days indie filmmakers really HAVE to have their own editing system to do their own video and audio editing.
 
I edit and sync the dialogue inside Premiere Pro, export the dialogue to a WAV, and give it to our audio guy, who has used Logic for the past few productions but will be using Pro Tools on the next. He scores, does foley, mixes the dialogue down to the final mix, and bounces it back to WAV for me. I marry it to the picture inside Premiere. So far our longest short has been 12 minutes; on a longer project I'd imagine doing several smaller files that line up back to back to make it more manageable.

We've yet to mix in 5.1, but are going to this summer.

I'm not sure I could give a good estimate on a 2 hour feature. Would it include just the mix, or foley and ADR as well? It would depend on the project, the overall budget, and who is mixing it.
 
I cut my dialog in FCP, then separate each person's dialog onto its own track, export each track as AIFF, and pull them into Soundtrack Pro to clean and sweeten there.

Every voice starts with a 60/60 cut (low cut @ 60 Hz and high cut at 60 kHz) to isolate just the human vocal frequencies... then I remove the background between words/phrases (to be replaced later) for each track.
My dialog usually goes through a compressor/limiter as well (I know alcove has said he disagrees with this practice, but I like the sound of it).
Then a pass for footsteps for each actor
Then a pass for hand noise for each actor
Then a pass for sound fx for the production
Then a pass for ambiance for each scene/shot
Dialog gets balanced, then sent to a bus
Hand and feet get balanced, then sent to a bus
fx gets a bus
ambiance gets a bus.
The busses are then mixed and automated to blend the world together as best I can before putting a final limiter on the whole shooting match (it normally doesn't have to do anything, but it catches anything I've missed) to get it to top out at -6 dB.
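The bus-and-ceiling stage above can be sketched in a few lines, treating audio as plain lists of float samples. This is only the arithmetic of a -6 dBFS ceiling (10^(-6/20) ≈ 0.501 in linear amplitude); bus names and sample values are invented, and a real limiter applies smooth gain reduction with attack/release rather than the hard clamp shown here.

```python
def mix_busses(*busses):
    """Sum equal-length busses sample by sample."""
    return [sum(samples) for samples in zip(*busses)]

def hard_limit(samples, ceiling_db=-6.0):
    """Clamp peaks so nothing exceeds the ceiling (here -6 dBFS ~ 0.501)."""
    ceiling = 10 ** (ceiling_db / 20.0)   # dBFS -> linear amplitude
    return [max(-ceiling, min(ceiling, s)) for s in samples]

# Four toy busses, three samples each (values made up):
dialog    = [0.30, 0.40, 0.35]
hand_feet = [0.05, 0.10, 0.05]
fx        = [0.00, 0.20, 0.00]
ambiance  = [0.02, 0.02, 0.02]

master = hard_limit(mix_busses(dialog, hand_feet, fx, ambiance))
print(max(abs(s) for s in master))  # never above ~0.501, i.e. -6 dBFS
```

As the poster says, a well-balanced mix rarely hits the ceiling; the limiter is just a safety net for the odd stray peak.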

Export the whole thing as a mixdown to AIFF, then back to FCP: silence the old tracks and drop this mix back in (with separate exports of dialog and everything else in case we ever need to go overseas with it), export the full-resolution QuickTime, then off to Compressor for transcoding to other formats.
 
I'm alright, thanks. I have no NAB plans. I'm supposed to be helping on a 48 Hour project that opening weekend (April 8 -10th). Are you going to be here, during the following week?


Oh, I do my audio in Premiere.
 
Err,

60 kHz is 60,000 Hz.

Do you have an EQ that goes to 60k?

If you mean 6,000, then I think this might be something told to you by a sound man who was fed up with receiving nothing but noisy, hissy recordings and automatically lopped off everything above 6k.

There is definite clarity above 6k that can be salvaged in a recording, and IMHO you should check it out before automatically cutting it.

Send me a sample recording if you want to see what I mean :)
 
I mixed it myself, in Premiere, and I have no clue what I'm doing.

For me, how much I'm willing to pay is not a question I can answer. All along the way, compensation, for everyone, has been a question of how much I can pay. Everyone deserves to get paid, and everyone deserves fair compensation, but I've had the difficult task of deciding how to divide up my meager budget.

I paid my four lead actors. They were each paid a flat-rate, per work-week. My boom-op got the same flat-rate. I paid my composers every cent I had at the time. Every step of the way, it has simply been figuring out how much money I have, and deciding whom to pay it to.
 
Yeah, it all depends on how big the actual budget of the production is. On the super low end, $25 an hour for the guy just graduating school with an OK reel. On the high end, well, I'd go with the Skywalker Sound team and whatever they charge if there's budget for it.

I'll be at NAB this year as well.
 
Sorry - I misremembered the numbers completely (recordings run from 20 Hz to 20 kHz, so 60 kHz is just stupid on my part)... back to the books:

http://en.wikipedia.org/wiki/Vocal_range
Wikipedia said:
In terms of frequency, human voices are roughly in the range of 80 Hz to 1100 Hz (that is, E2 to C6) for normal male and female voices together.

So 6 kHz, not 60 (thanks for catching that so I don't sound like a complete idiot when someone goes looking fruitlessly for a filter that can trim that high) is where I start. Obviously, I don't just do it blindly; I have good headphones on and adjust it until it just starts cutting into the voice, then back off slightly. I was simplifying for the sake of brevity (incorrectly, too :( ). If the filter cuts too deeply, I always back off to preserve the natural sound as much as possible.
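The quoted E2-to-C6 range is easy to sanity-check with the standard equal-temperament formula, f = 440 × 2^((midi − 69) / 12), where MIDI note 69 is A4 = 440 Hz. A quick sketch (the helper name is my own, and it only handles natural notes):

```python
NOTE_OFFSETS = {'C': 0, 'D': 2, 'E': 4, 'F': 5, 'G': 7, 'A': 9, 'B': 11}

def note_to_hz(name):
    """Convert a natural note name like 'E2' to its frequency in Hz."""
    midi = NOTE_OFFSETS[name[0]] + 12 * (int(name[1:]) + 1)  # C4 -> MIDI 60
    return 440.0 * 2 ** ((midi - 69) / 12)

print(round(note_to_hz('E2'), 1))  # 82.4   -- low end of the quoted range
print(round(note_to_hz('C6'), 1))  # 1046.5 -- high end, i.e. roughly 1100 Hz
```

So the fundamentals sit well below 6 kHz, which is why a cut up there only trims harmonics and sibilance rather than the voice itself.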

As with any endeavor in visual/audio media, the numbers will only take you near the result - the rest is up to YOUR senses as a craftsperson. All sounds have higher frequency harmonics produced not just from the voice itself, but from the environment and even from the recording equipment. Things vibrate... that becomes part of their sound as we perceive it. Choosing to keep or eliminate those harmonics is how we sculpt the sound to fit in the overall soundscape.

I think one of the hardest things for folks coming into audio to grasp is the concept of layered frequencies as a cohesive soundscape. The reason orchestral music seems so large is that it very delicately uses the tonal ranges of all the instruments to fill the soundscape from top to bottom. Variations of rhythm in each of the instruments create a fluctuation back and forth from lows to highs that can suggest a rhythm separate from the actual beat of the piece or even the individual instruments. An instrument of a specific frequency range played louder than other instruments in a similar range will suppress the listener's perception of the quieter instrument at the same frequency. In recorded audio, this phenomenon has to be created rather than happening naturally, since the waves are blended at a single source (the speaker) rather than interacting physically in the room.

For an example of this, listen to the soundtrack on Altman's "MASH: the movie". The vocals are difficult to pick out due to all the mingled sound sources at the same frequencies. This was a choice he made that drives me crazy (perhaps the intention)... and it's how I hear in crowds - which is why I don't like crowds, it's physically painful (sidetrack). In the same way that we can draw the viewer's eye to a section of a frame by slightly darkening the rest of the frame with a vignette/power window effect, we can do the same in the audio by slightly "dimming" the frequency range of the part we want to focus on in all of the sound sources other than the one we're emphasizing. As with the vignette, this doesn't have to be an extremely pronounced effect; subtle changes will do a better job by having the effect not be noticeable.

I've gone off on a complete tangent here and will now return you to your regularly scheduled program.
 
What would you pay for a good amount of ADR, total foley, total FX, total scoring, mixing, and all predubs and final print-mastering?

Depends on how carried away you want to get with it. One could spend 2 weeks, or one could spend 9 months, working on sound.

Beware of low-budget recording studios or guys doing sound out of their homes. Often they never audition their mixes through cheap TV speakers. A mix may sound great through a pair of $1,000 KRK powered speakers but distort through a cheap 3" speaker at even low/mid volume levels.
 