
Question about dialogue editing.

Should I put each actor's voice on the same track, or should I have a different track for each actor? If one actor makes a noise like a laugh or cough while the other is talking, it can then be heard, of course. Or I could put just the coughs and other noises on a separate track.

And when applying room tone, do I have to put it on the same track as the dialogue, filling in all the breaks, or can I just put it all on a separate track, and that works as well? Thanks.
 
Okay, thanks. I actually have been using 'fill left' already, but I thought that just turned it into stereo, my bad.

So is there any reason to keep a lot of video shots on separate video tracks as well? I saw a tutorial on Video Copilot where the guy has the clips on separate tracks when he does transition effects, but the effects seem to work the same even if the video is on the same track.
 
Lol, if you only have sound on one track and you use the 'fill' effect, it does indeed make it stereo, but because both channels are then identical, the sound is centered in the middle. And that's what mono sound is: centered in the middle.
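To make the "dual mono" point concrete, here's a tiny Python sketch (made-up sample values, not from any NLE) showing that a mono signal copied to both channels has zero left/right difference, which is exactly why it images dead center:

```python
# A mono signal "filled" to both channels: the two channels are identical,
# so the sound sits dead center -- which is what mono is.
mono = [0.1, 0.5, -0.3, 0.2]           # one channel of invented samples

# "Fill left"-style duplication: copy the single channel to L and R.
left = list(mono)
right = list(mono)

# The "side" signal (L minus R) is what would be panned off-center...
side = [l - r for l, r in zip(left, right)]

# ...and here it is silent: nothing is panned anywhere.
print(side)
```

Because both channels carry identical samples, the difference is zero everywhere, so the file is "stereo" in format but mono in content.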

About the transitions: they work both ways. (I presume you use crossfades? The crazy transitions behave a bit differently when used on the same or different tracks; just test it, I'd say.)
Keeping video on separate tracks gives you a better overview and more control over the end frame and start frame of a transition. The transition can't go beyond the end frame or start before the start frame you determined.

When you use a transition between two video clips on the same track you can get the same result, but you can move the transition until it reaches the start or the end of either take. So you have to 'search more' for the proper starting and ending point of the transition.
When you want fades, you can work on different tracks and use keyframes for the opacity of the video. In Premiere Pro it works the same way as keyframing volume on audio tracks.
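For what it's worth, a keyframed fade is just interpolation between (time, value) pairs. Here's a rough Python sketch of linear keyframe interpolation — an illustration only, not Premiere's actual implementation, and the times and values are invented:

```python
def keyframe_value(keys, t):
    """Linearly interpolate a parameter between sorted (time, value) keyframes."""
    if t <= keys[0][0]:
        return keys[0][1]          # before the first keyframe: hold first value
    if t >= keys[-1][0]:
        return keys[-1][1]         # after the last keyframe: hold last value
    for (t0, v0), (t1, v1) in zip(keys, keys[1:]):
        if t0 <= t <= t1:
            return v0 + (v1 - v0) * (t - t0) / (t1 - t0)

# A fade-out: 100% opacity at 2.0 s, 0% at 3.0 s.
fade = [(2.0, 100.0), (3.0, 0.0)]
print(keyframe_value(fade, 1.0))   # before the fade starts
print(keyframe_value(fade, 2.5))   # halfway through
print(keyframe_value(fade, 3.0))   # fully faded
```

The same function describes keyframed volume on an audio track; only the parameter being interpolated changes.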
 
Stacking means different clips at the same moment (vertically arranged on different tracks).

That is incorrect. Stacking refers to exactly what I said: duplicating identical audio clips and piling them on top of each other to increase volume. This was actually a common practice among editors in the very early days of digital editing, when NLEs did not have adequate audio implementations. The correct term for your meaning is layering; that is when several different audio clips are used to create a single sound effect.
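A quick Python sketch (with made-up sample values) of why stacking an identical duplicate raises the level: when the mixer sums two identical clips, every sample doubles, which works out to a gain of about +6 dB:

```python
import math

clip = [0.1, -0.2, 0.3]                         # invented samples

# Stacking: the identical clip sits on a second track and the mixer sums them.
stacked = [a + b for a, b in zip(clip, clip)]
print(stacked)                                   # every sample doubled

# Doubling the amplitude is 20*log10(2) ~= +6 dB of level.
gain_db = 20 * math.log10(max(stacked) / max(clip))
print(round(gain_db, 2))
```

Because the two copies are sample-identical they add perfectly in phase; different clips (layering) would not simply double.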

I instruct in the use of proper terminology and established protocols for a reason - so when the filmmaker finally has the budget to hand off a project to a professional sound editor they can communicate on a professional level in a common language.

The biggest reason to conform to established protocols is to save money. On most projects I spend a long time just reorganizing badly laid-out audio tracks and trying to figure out which of the stacked tracks is the original.
 
Ok, my bad: English is not my native language, I'm not totally up on audio slang, and as far as I knew, stacking is like making stacks: putting things on top of each other. That totally explains why you assumed I was referring to using duplicates to increase volume.
Layering is indeed a better term; why didn't I think of that?

I was trying to explain that what happens vertically across the layers influences output levels, but that what happens horizontally within one track does not influence the output level of that track, since he was considering using different tracks if that's best for the volume. (Of course the ironic replies caused that reaction.)

The reasons why you explain things are totally right.
A good overview of the project and being able to communicate with others are most important for an efficient (cross-platform/program) workflow.
You explain things at a really advanced level, so you seem to be at least two steps ahead when you explain things.
(Terms like 'phase cancellation' are really weird words when you don't have any knowledge of the physical characteristics of sound. :P)
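For readers new to the term, here's a toy Python sketch of phase cancellation in its simplest form: mix a signal with a polarity-inverted copy of itself and the two cancel to silence. The sample values are invented:

```python
signal = [0.4, -0.1, 0.25, 0.0]       # invented samples
inverted = [-s for s in signal]        # 180 degrees out of phase

# Summing the two on the mix bus: every sample cancels its opposite.
mixed = [a + b for a, b in zip(signal, inverted)]
print(mixed)                           # all zeros -> silence
```

Real-world cancellation (e.g. two microphones picking up the same source at slightly different distances) is partial rather than total, but the mechanism is the same.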

As far as I can tell, Harmonica is in the phase of 'how do I do things to achieve something myself'. Once he knows how things work, he can move on to 'how do I organise my project to make sure the audio engineer can make sense of it', because then he knows what he is doing and why.

You don't need Newton's laws or all the traffic rules to teach a child how to ride a bike. He or she just needs to know how to move the legs and practice balancing. Once riding the bicycle is no problem, you start to explain to the child what the rules are and why they're important.

So, the way you organize projects is really insightful information for any of us, and your advice is valuable. But sometimes you need to 'translate' it and just add a short list of things that need to be done. (Or something like that.) Otherwise it just gets confusing for a beginner.
 
Walter - you're coming into the middle of the conversation. A lot of the things I've mentioned in this thread (like phase cancellation) have been covered in other threads - quite a number of them questions posed by Harmonica himself. And it's easy enough to do a quick Google search and get a quick explanation of unfamiliar terms.

And, BTW, he was supposed to have already purchased Dialogue Editing for Motion Pictures by John Purcell, which explains most of the dialog editing process in exquisite detail.
 
Heh...

[attached screenshot of an editing sequence]


That's what a normal sequence looks like when my cousin's editing: clip audio, ADR, enviro sound, gun shots, hits, enviro hits, music (I don't think he had music in this pic yet, though), etc. Hehehehh.
 
Looks sexy.

Why isn't ADR included in the actor's dialogue track?

Any special technique or simple tip to get the cleanest dialogue possible?

I shot a little clip recently where I had to interview a few people and edited very, very quickly (every interviewee would say one or two words, no more, and I'd jump to another one). Since it's just interviews, I didn't touch the sound at all, but I was wondering how I would have done it had it been for a short film.
 
Okay, thanks. I forgot about that dialogue editing book by Purcell. My sister was supposed to order it for me, but hasn't gotten back to me in the last few months about it. I'll ask her when she is back from vacation. If she hasn't gotten it by then, I will re-order. Why would I put the ADR on a different track than the original dialogue track if it were the same character speaking from the same direction in the scene? I'll find out once I get the book. I'll just make a rough dialogue edit for timing and wait till the book arrives. That way there will hopefully be no more mistakes made on dialogue. Thanks a lot for the input, people.
 
Why isn't ADR included in the actor's dialogue track?

On large-budget projects a very large percentage of the dialog is ADRed. What happens is the first very rough edit of the film is put together during the shoot. The Supervising Sound Editor watches these along with everyone else and puts together the ADR cue sheet. S/he will always err on the side of caution; if there is even the slightest chance the production dialog cannot be salvaged, the scene goes on the ADR cue sheet. A production sound editor will work with the existing edit and use dialog from alternate unused takes if needed. The ADR is recorded and an ADR editor puts together the ADR track. Both completed tracks, production sound and ADR, are presented to the mix team. At the time of the final mix the rerecording mixer works with the production sound track first and will do noise reduction on the production sound. If there is no way to salvage the production sound then, with the assent of the director, the rerecording mixer will use the ADR. That is why they are on adjacent tracks.

What is surprising is how much of the production sound the rerecording mixers can salvage. However, they are using the best NR software there is, and often several at the same time. This requires huge amounts of real-time processing power. It is not unusual to have iZotope RX, Cedar, iZotope RX and Waves Restoration on a single channel (yes, I know the RX is on there twice) in addition to an EQ or two and a dynamics processor. Anyway, as I was saying, they may only use a small percentage of the ADR that was recorded (they ADRed 70% of the film but only used 10% in the final mix).

What is also cool is that some rerecording mixers send ADR tracks out to external processors that actually add a little noise and personality to the ADR dialog so that it more closely matches the production sound. In this circumstance it is much easier to do an outboard send and return on an individual track rather than automating it every time an ADRed line appears.

Again, you have to remember that these are established protocols for the big audio houses, but all of us little guys follow these protocols assiduously. Why? Because sometimes they farm stuff out, and when you send your tracks back they had better conform to their workflow, or you won't get any further work.

Why should an up-and-coming filmmaker be aware of these protocols? Let's say you have $1,000 for audio post. At $25/hr that's only 40 hours of audio post time. If the sound editor/mixer needs to spend three or four hours reorganizing the audio, that's up to 10% of your audio post money wasted.
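The arithmetic above, spelled out (using the same figures):

```python
budget = 1000                  # dollars for audio post
rate = 25                      # dollars per hour
hours = budget / rate
print(hours)                   # total hours of audio post time you can afford

wasted = 4                     # hours spent just reorganizing messy tracks
print(100 * wasted / hours)    # percentage of the budget lost to cleanup
```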


Any special technique or simple tip to get the cleanest dialogue possible?

Have the production sound recorded by a professional sound team. In audio post, have it mixed by a professional rerecording mixer. That will get you the cleanest dialog possible.

It's all about the proper techniques and the proper tools; you need to learn the same lessons and do the same things the professionals do. Using budget equipment makes the job more difficult, and the smaller the budget the more difficult it is, because you need to overcome the deficiencies of inadequate equipment.

I shot a little clip recently where I had to interview a few people and edited very, very quickly (every interviewee would say one or two words, no more, and I'd jump to another one). Since it's just interviews, I didn't touch the sound at all, but I was wondering how I would have done it had it been for a short film.

The approach is different for ENG work, where extremely fast turnaround time is essential - hours (or even less) instead of months or years. There are also established protocols for ENG audio, but the capture process is a bit different. Once you get into the realm of documentaries the protocols are similar to narrative projects.
 
Okay, thanks for all the input! I can put my ADR on a different track, but I have already blended most of it with sentences recorded at the live shoot. I can remove it all and put it on a different track, but then it will switch tracks between spoken words, of course.
 