A) These three editing intermediaries are really only necessary when editing uncompressed >1080 (2k - 4k) resolution image files, right? Codecs that store in 4:4:4 or 4:2:2?
B) There's really no need to rewrap or transcode to an intermediary for anything already compressed into 4:2:0 H.264 MPEG4 AVCHD codecs, is there? Those are already compressed?
As far as I understand it:
Intermediary codecs such as ProRes, DNxHD and CineForm are useful for editing uncompressed or very high-resolution footage. They shrink the file size whilst keeping most of the image quality.
They are also designed to work really well with NLEs: ProRes was designed for FCP, DNxHD for Avid, and CineForm, whilst not designed for any one NLE in particular, apparently works quite well with Premiere.
Files such as .r3d files from a RED are huge and genuinely difficult to work with in an NLE: they take a lot of processing power to decode. By transcoding to an intermediary codec, you keep most of the image data, reduce the file size, and put the footage into a format your NLE will be a lot happier with.

You have two options. You can transcode to a low-res proxy, such as ProRes Proxy, to keep file sizes way down; you then edit much as in a traditional film workflow, finishing with an online conform where you re-link to the raw files. Alternatively, you can transcode the .r3d files to something like ProRes 4444 and keep almost all of the quality of the original, even applying a log gamma curve to preserve colour data; done that way, you don't need to re-link at the online stage. Many productions use workflows like this, including things like commercials.
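As a rough sketch of the two options above, here is what they look like with FFmpeg. This is a hedged example, not a prescribed pipeline: the input and output filenames are hypothetical, and note that R3D files usually need RED's own tools (e.g. REDCINE-X) or an NLE for the initial decode, since FFmpeg's REDCODE support is limited, so a camera master your FFmpeg build can decode is assumed here.

```shell
# Option 1 - offline/proxy pass: transcode the camera master to
# ProRes Proxy (prores_ks profile 0) for lightweight editing.
# You conform back to the raw files at the online stage.
ffmpeg -i camera_master.mov -c:v prores_ks -profile:v 0 \
       -c:a pcm_s16le edit_proxy.mov

# Option 2 - full-quality pass: ProRes 4444 (profile 4) keeps
# 4:4:4 colour and nearly all of the original quality, so no
# re-link to the raw files is needed later.
ffmpeg -i camera_master.mov -c:v prores_ks -profile:v 4 \
       -c:a pcm_s16le edit_4444.mov
```

The trade-off is exactly the one described above: option 1 gives tiny files but requires an online conform; option 2 gives large (though still manageable) files and a simpler finish.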
When it comes to H.264, MPEG4, AVCHD etc. you are dealing with very compressed files. Traditional DV uses intra-frame compression: each frame is compressed individually, one by one. H.264, MPEG4, AVCHD, and even older codecs such as HDV use a different type of compression (inter-frame, or "long-GOP"), in which a group of frames is compressed together: full keyframes are stored only every so often, and the frames in between are stored mostly as the differences from their neighbours. The parts of the picture that stay the same between frames barely need storing at all; only the parts that change do. The important point is that multiple frames are compressed together.

With intra-frame compression, when you make a cut you are simply cutting on a self-contained frame. With long-GOP compression, you can be cutting on a frame that only exists as a set of differences, so the computer has to decode back from the nearest keyframe on either side of the cut, every time it plays across it. This puts a lot of unnecessary strain on the computer. By transcoding to an intra-frame intermediary, you're just making it easier on the machine. If you have a dedicated, fast computer and/or are only doing relatively basic edits, it may not be necessary to transcode; but if you have an older, slower, or shared machine, or are doing complex edits, perhaps some compositing etc., the computer is going to have to work a lot harder than it needs to.
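To make the long-GOP point concrete, here is a hedged FFmpeg sketch: the first command shows a clip's frame types (I = keyframe, P/B = difference frames), and the second transcodes it to DNxHD, which is intra-frame only, so every frame becomes a clean cut point. The filename is hypothetical, and a 1080p25 source is assumed, since the DNxHD encoder only accepts fixed resolution/frame-rate/bit-rate combinations (145 Mb/s is one of the valid 1080p settings).

```shell
# Show the GOP structure: the I/P/B pattern reveals how many
# frames exist only as differences from a nearby keyframe.
ffprobe -v error -select_streams v:0 \
        -show_entries frame=pict_type -of csv clip.mp4 | head -30

# Transcode the long-GOP H.264 source to intra-frame DNxHD.
# Assumes a 1080p25 source; DNxHD requires a matching
# resolution/rate/bit-rate combination (here 145 Mb/s, 4:2:2).
ffmpeg -i clip.mp4 -c:v dnxhd -b:v 145M -pix_fmt yuv422p \
       -c:a pcm_s16le clip_dnxhd.mov
```

If the ffprobe output shows long runs of P and B frames between I frames, that is exactly the "cutting in the middle of compressed frames" situation described above, and it is why the transcode helps.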
Others may correct me where I may be wrong.