Achieving a Symphonic Film Score Sound with Sample Libraries
by Charlotte Trapasso
Introduction
For more than half a century, symphonic film music has largely been defined by the sound of the performers, spaces, and engineers of two filmmaking centers: Los Angeles and London. Whether you are a composer with access to live performances but still in need of excellent demos, or you rely on virtual instruments for your music to be heard at all, the L.A. and London aesthetics are almost certainly production “holy grails” to you. The latter has been the standard for me since 2001, when I fell in love with Howard Shore’s scores for The Lord of the Rings trilogy.
Much has been said about achieving these standards over the years, and this is an attempt to condense some of what I have learned into what is hopefully a clear and useful approach for having 90s/early 2000s-style recording sessions at your fingertips… at least, as close as virtual instruments will allow.
Note the timeframe specified. My focus is a large orchestra playing together at the same time in a large space. More modern techniques like striping are not considered here.
The Ensemble (Sample Libraries)
The best composers and orchestral contractors understand how to “cast” their scores, and how to write for those chosen players. There is no reason why we, as VI-based composers, should think any differently. You may assume that some of the libraries recorded with these very same players in these very same rooms are the obvious, best choice for the results we’re after here… but I do think it is a bit more complicated than that!
We all know that every library has strengths and weaknesses. In my view, if something sounds aesthetically right on paper but does not cooperate as fully as I’d like in terms of performance and tightness of programming, there is reason to look to other tools. Some libraries can be sculpted into a better place sonically than they offer out of the box, while still allowing a much more expressive, musical, and clean result than others which offer more immediate sonic satisfaction but fall short in those other important dimensions. You can make an expressive, fluent VI sound well-recorded, and match the spirit of its performance to the one you have in mind: London, Los Angeles, or otherwise. It is a much more futile effort to make a well-recorded VI sound more expressive and fluent than it will inherently allow.
My template consists mainly of libraries from Performance Samples, Acousticsamples, and Spitfire Audio, the latter of which has admittedly been getting phased out of this template piece by piece for years. They do capture London session players in favored London recording spaces, but I find myself frustrated enough with other aspects that they are by no means the default choice for me anymore. Libraries with smoother, consistent programming and considerably greater sampling depth for musicality and expressivity are more valuable to me, even if they need some love to sound their best!
Performance Samples embodies this well (obligatory disclaimer that Jasper Blunk is a dear friend). Pacific Strings, for example, is currently the best string library available for my purposes, with refined programming and few limits on expressivity. It was also recorded in an unconventional (but lovely sounding) space with an extremely simple AB microphone array, and close section mics. This requires some alchemy in order to bring it in line with the sumptuous multi-mic approach we expect, but it is well worth the effort. The same is true of Acousticsamples’ VWinds – bone dry, semi-modeled woodwinds which let you perform nearly anything, at the cost of requiring some careful mixing.
By following some basic principles of performance, recording, mixing, and composition, it is possible to unite disparate libraries such as these into a unified template with a sonic signature and performance style which is, perhaps, rather far removed from the out-of-the-box sound of its discrete parts.
Audio example 1 – “Out of the Box”
Microphones
An understanding of the “Decca” method of orchestral recording is invaluable for us. This has been the standard for many years, and while some engineers may try to augment or evolve it, its usefulness as a core philosophy of recording does not go out of style.
Some remedial stuff, then: the Decca Tree is an array of three microphones along the front center of the orchestra, with the center mic reaching into the ensemble a bit, and it forms the backbone of the sound. A wider pair of microphones on either side of the Tree, often called Outriggers, are almost always used as a unit with the Tree. These five microphones represent most of what is needed to capture an orchestra in a good space. Further/higher microphones may be added for more natural depth and reverberation, bringing what may be called the “main array” to seven microphones, or even more if formats like Atmos are being utilized.
Fig. 1 – A five-microphone Decca Tree array with a symphonic-sized string orchestra
Closer mics are used for greater detail, presence, and clearer imaging of the player or players in question. This can all vary somewhat freely, as the goal with the main array in a good room is to sound “good enough” on its own and embellish from there. Close microphones can bring excitement and sonic interest to the main sound, but in my opinion, at least for the aesthetic being discussed here, subtlety and good taste are key. Most commonly, such applications for close signals would be:
- subtle presence added to the strings, especially the low strings, with the option to enhance low frequencies on the bass section close mics
- subtle clarity for the woodwinds, which may become lost in the mix otherwise, and of course, less subtly when one performer is being highlighted as a soloist
- greater bite and directivity from the horns – and perhaps from the other brass as well when the music calls for their natural rudeness to be enhanced even more
- detail on any momentary soloists across the ensemble, and on keyboards/percussion if necessary
The key with all of these, again, is to approach them as subtly as possible, lest we end up with an imbalanced, clunky, or claustrophobic mix. The blend of the ensemble as captured by the main array (and thereby the intentions of the composition itself) has to be respected and preserved when enhancing it, not obfuscated. Another important note about close mics is that different developers will deliver them with varying positioning out of the box. A centered close signal can be useful for a very prominent solo, but when you are using these mics to enhance the sound of the main array, you must make sure that their position agrees with where the sound source sits in the stereo image of the main array, or things will become muddled quickly. Along similar lines, some developers will give you the option to time-align the close mics to the main mics. This is purely down to preference.
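If your tools don’t expose time-alignment but you want to experiment with it yourself, the arithmetic is simple: the delay to add to a close mic is its distance from the main array divided by the speed of sound. Here is a minimal Python sketch; the 6 m distance is purely an assumed example, not a measurement from any real session.

    # Rough sketch: estimating a close-mic alignment delay.
    # The distance is a placeholder – estimate the gap between the close mic
    # and the main array for your own room/library.

    SPEED_OF_SOUND_M_S = 343.0   # at roughly room temperature
    SAMPLE_RATE = 48000

    def alignment_delay(distance_m: float) -> tuple[float, int]:
        """Delay to add to the close mic so it lines up with the main array."""
        seconds = distance_m / SPEED_OF_SOUND_M_S
        return seconds * 1000.0, round(seconds * SAMPLE_RATE)

    ms, samples = alignment_delay(6.0)   # e.g. a close mic ~6 m nearer the player than the Tree (assumed)
    print(f"{ms:.1f} ms, {samples} samples")   # about 17.5 ms / 840 samples

In practice a simple sample-delay plugin on the close signal does the job; whether the aligned or un-aligned version sounds better is, again, down to preference.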
The main microphone array should be thought of as fixed across the whole ensemble: the entire orchestra is heard by these microphones in the same way when recording with everyone live. When you set the ideal balance of the Tree, Outriggers, and Ambient microphones, keep that balance the same on every track, at least as far as possible given the differences in what mics are offered by different libraries. This is fundamental to a unified, consistent ensemble sound.
When the differences in mics offered become difficult to navigate (as with Pacific Strings’ very simple, non-Tree-based mic setup, for example), it pays to have ears which know what needs to change, and the mixing skills to do so. Taking Pacific’s AB/close signals and making them sound the way my brain knows a more complex, Decca-type recording would sound is vital to my template. It simply would not work otherwise.
Some key points of this process include:
- Finding the right balance of detail and roominess using the AB and close signals
- Using panning on the close signals to clarify the position of each section in the room (see the sketch after this list)
- Treating the close mics on the low strings for greater clarity, weight, and focus
- Adding an overall string section EQ
- Adding an overall string section reverb
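On the panning point: a DAW’s pan pot is essentially a constant-power gain law, so having a feel for the numbers can help when matching a close signal’s position to where the section already sits in the main array. A small illustrative Python sketch, with an assumed -3 dB-centre law and an arbitrary example position – not a prescription for any particular seating:

    import math

    # Sketch only: a constant-power (-3 dB centre) pan law, the kind of maths
    # a DAW pan pot applies. The position below is illustrative.

    def pan_gains(position: float) -> tuple[float, float]:
        """position: -1.0 = hard left, 0.0 = centre, +1.0 = hard right."""
        angle = (position + 1.0) * math.pi / 4.0   # maps to 0..pi/2
        return math.cos(angle), math.sin(angle)    # (left gain, right gain)

    # e.g. nudging a bass close mic toward the right, if that is where the
    # section sits in your main array (adjust to your seating and taste)
    left, right = pan_gains(0.6)
    print(f"L {20 * math.log10(left):.1f} dB, R {20 * math.log10(right):.1f} dB")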
One note about adding reverb at this stage of the template:
For these strings, and for any other instruments which need this sort of enhancement, I use particular reverb plugins & settings to emulate the sound of real space – a subtle and “realistic” effect of larger/more consistent acoustics rather than a lovely glossy reverb as we might add to the entire mix later. Using whichever part of the template has the most natural spaciousness as reference, this layer of “real space” reverb allows me to bring everything else in the template to that same point. Given that this article is focused on an aesthetic to which large acoustics are integral, it’s obvious that this baseline of matched reverb should definitely lean towards big, but clear.
Audio example 2 – Mics and Reverb
Balance
Once your microphones and various tweaks at that level are set and forgotten (with the exception of any signals you may wish to automate over the course of a piece), the ensemble should feel as if it is sitting together quite naturally – spatially and acoustically. You now need to make sure that everything sits together correctly in terms of loudness.
This, unfortunately, does simply require a lot of close listening to the real thing, and good judgement based on that. Most of us are probably better at this than we might think; I have a feeling we all listen to a lot of great orchestral recordings! Direct comparisons of your mock-ups with live recordings are a great way to check these things.
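If you want a numerical cross-check alongside your ears, it is easy to script one. The sketch below is only that – a sketch – using the third-party pyloudnorm and soundfile Python libraries and placeholder file names; it compares the integrated loudness of a bounced stem against a comparable passage from a reference recording, says nothing about spectral balance, and should be treated as a sanity check rather than a verdict.

    import soundfile as sf
    import pyloudnorm as pyln

    # Compare the integrated loudness of a mock-up stem against the matching
    # passage of a reference recording. File names are placeholders; trim both
    # files to roughly the same musical material before comparing.

    def integrated_lufs(path: str) -> float:
        data, rate = sf.read(path)
        meter = pyln.Meter(rate)              # ITU-R BS.1770 loudness meter
        return meter.integrated_loudness(data)

    mockup = integrated_lufs("my_brass_stem.wav")
    reference = integrated_lufs("reference_brass_passage.wav")
    print(f"mock-up {mockup:.1f} LUFS vs reference {reference:.1f} LUFS "
          f"({mockup - reference:+.1f} dB difference)")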
However you go about it, don’t skip this step. It is probably the most glaring issue I hear in beginner virtual instrument use. If your brass, for example – or even just one instrument in the brass section – is fundamentally set too quiet or too loud against the rest of the orchestra, the brain will rebel against the mismatch of timbre and level. This is SO important to get right! You could also do this before the microphone stage, balancing everything on the Tree alone (or whatever the closest main mic signal provided is), but I prefer to do it after.
Audio example 3 – Balanced
Performance
If you can get a template this far, I really believe the hard work is over. That doesn’t mean the rest isn’t important, though. This step may well be the most important, in fact.
My earliest efforts at virtual instrument production were quite bad in many ways. However, one way in which they were never totally bad is that, due to years of obsessive listening and experience in real ensembles, I was very intuitively performing everything in a natural and idiomatic way. The balance might have been off, I may have been using libraries that did not know about the concept of “mic position,” and I may have been bad at mixing in the sense that I simply did not mix… but they at least still sounded like a piece of music!
If I’d drawn notes into the piano roll and called it a day, those early efforts would have been 100% bad. If I’d had the template I have right now but had still just drawn notes into the piano roll and left it at that, it would still be really bad – and in a worse way than I believe they actually turned out – because the result wouldn’t sound like a piece of music badly recorded; it would sound like someone piped some awful robotic MIDI output into a nice space. Despite this article mainly revolving around production techniques, I think a good VI performance which is badly produced is preferable to a well-mixed VI performance which is bad. I believe we react to music first, not production, and there are any number of beloved but old/rough recordings in every musical style bearing this out. The music can sing without the production. The production is worthless without the music. Ideally, we do both well, so that the music is served respectfully.
As mentioned in the above “The Ensemble” section, flexible and expressive VIs will allow you to not only perform well, but perhaps to emulate specific mannerisms and attitudes that we expect of film session players. I think this is also mostly an intuitive thing which comes from intimate familiarity with the style in question. How far you may want to specifically focus on this aspect is up to you.
Another “real, important, but maybe a little crazy” thing to pay some attention to is proper intonation within the orchestra, especially the low brass. Excellent players are adjusting their tuning constantly, in the context of whatever harmonies they are making with the other players. In other words, they are not always playing in equal temperament based on “A440” tuning! Samples, on the other hand, do not care or think about this. Some developers prefer to keep their samples tuned loosely, perhaps in part to crudely simulate these variances and avoid sterile, constantly equally tempered harmonies. Other developers will deliver perfectly tuned samples and systems to automate intonation variances in real time, perhaps even based on harmonic context – just as it happens in reality.
But – however well developers tune (or don’t tune) individual samples – many plugins or sample engines simply do not support the mechanisms for automated tuning, so it falls to us to do it manually… if we want it. Do you feel that it is worth riding the pitch bend wheel on every track, OR drawing in pitch bend data on every track, OR copying the full MIDI note data of the entire project onto every track – out of range so it doesn’t actually play, of course – so that Kontakt’s “Dynamic Pure Tuning” script can recognize the harmonic context of the piece and do it for you? It’s worth your consideration and further research, if nothing else.
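If you do decide to draw the data in by hand, the arithmetic behind those pitch bend rides is straightforward. A minimal sketch, assuming a ±2-semitone bend range (a common default, but check your library) and using the classic just-intonation offsets from equal temperament; the mido Python library here is just a stand-in for whatever your own tooling is.

    import mido

    # Convert a cents offset (relative to equal temperament) into a 14-bit MIDI
    # pitch-bend value, assuming the instrument's bend range is +/-2 semitones.
    # The offsets are the textbook just-intonation deviations for a few
    # intervals above a chord root.

    BEND_RANGE_CENTS = 200.0                 # +/- 2 semitones
    JUST_OFFSETS = {
        "major third": -13.7,                # 5:4 vs equal temperament
        "perfect fifth": +2.0,               # 3:2
        "harmonic seventh": -31.2,           # 7:4
    }

    def cents_to_pitchbend(cents: float) -> int:
        value = round(cents / BEND_RANGE_CENTS * 8192)
        return max(-8192, min(8191, value))  # clamp to the legal range

    for name, cents in JUST_OFFSETS.items():
        msg = mido.Message("pitchwheel", pitch=cents_to_pitchbend(cents))
        print(f"{name}: {cents:+.1f} cents -> {msg}")

So a major third sitting a pure 14 cents low works out to a pitch bend value of roughly -561 on a ±2-semitone range – small numbers, but audible in sustained brass chords.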
An inevitable question: does a good performance have to be done in real time with as much live CC manipulation, etc., as possible? In my experience, it does, but others may feel more comfortable drawing notes and data in and doing some or all of the sculpting by hand, and feel that those results are comparable. For me, there is enough tedious post-performance cleanup to do by hand as it is – and it is SO important to bother doing that cleanup! Making sure that things are synced but not robotic, that dynamics are agreeing across the ensemble, that note starts and stops are not sticking out awkwardly… these are small things that make a huge difference to how organic the result feels. I prefer to get as close as possible in real time, and then make small adjustments in the MIDI editor. It’s just more fun than entering/drawing a bunch of perfect MIDI information and then trying to humanize it.
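To make “synced but not robotic” a little more concrete, here is a toy sketch of the kind of partial tightening I mean when cleaning up note starts: pulling each one only part of the way toward the grid and leaving a little looseness, rather than snapping everything hard. The strength and looseness values are arbitrary numbers to taste, not a recipe.

    import random

    PPQ = 480          # ticks per quarter note
    GRID = PPQ // 4    # a 16th-note grid

    def tighten(start_tick: int, strength: float = 0.6, looseness: int = 4) -> int:
        """Pull a note start partway toward the nearest grid line, then loosen slightly."""
        nearest = round(start_tick / GRID) * GRID
        nudged = start_tick + (nearest - start_tick) * strength
        return int(round(nudged + random.uniform(-looseness, looseness)))

    recorded = [2, 118, 255, 353, 489]        # slightly loose 16ths, as played
    print([tighten(t) for t in recorded])     # closer to 0/120/240/360/480, but not exact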
Finalizing
With all of this done, the track should be feeling very much complete. As with the setup of the main microphone array, every step in the process should follow the same principle of keeping things at a level which would be “good enough” without the need for further changes. Don’t let anything fall short with the intention of fixing it in some later step. That never works out well, in anything!
I am very straightforward about this stage, doing whatever subtle polishing is needed across the entire ensemble: a bit of reverb, tape emulation, EQ, compression, and a limiter.
- Reverb is pretty similar whether you’re listening to L.A. or London: Lexicon or Bricasti adding even more splashy, glossy hall; tasty and delicious!
- Tape is also consistent, and anything emulating the Studer A800s for just a little vibe would be the “authentic” flavor
- You shouldn’t need to EQ much after setting things up and performing properly, so this is also about a little flavor and overall sculpting: bringing out and clarifying the low end, and adding some much needed air/sheen to the very high end (I like a Massive Passive for this)
- Light compression to tame any of the sillier dynamic peaks without making it all too polite (I like a Vari-Mu for this)
- The most transparent limiting possible to get things to a good level, cleanly
Audio example 4 – Finalized
Final Thoughts
This is only the broadest overview possible of the subject, hopefully centering the things I’ve found most important to learn over the years; basically, what I would write to myself from ten years ago, if I could, to give her a head start. There are so many other points to make about good VI practice in general, but that is outside the scope of what I can do here. Even in what I have touched on, so much relies more on developing your own ear and instincts than on following an exact formula, and I’d say that’s the most important thing to grasp with any of this. The value of careful listening can’t be overstated, nor can careful study of scores, so that the most important part of all this – the music – is itself working as it should.



