As a score mixer, engineer, and recordist, Alan Meyerson has shaped the sound of Hollywood for the better part of three decades. Working on some of the most beloved films of all time with leading composers including Hans Zimmer, James Newton Howard, Trent Reznor & Atticus Ross, Harry Gregson-Williams, Kris Bowers, Pinar Toprak, Tom Holkenborg, and Henry Jackman, to name a few, the GRAMMY-winner possesses a wealth of knowledge and experience that he is more than happy to share.
It would be near impossible to do a deep dive into all the iconic films Alan has worked on, so we did the next best thing – we gave him the Five Sounds With… treatment. Read on to learn about how Alan brings scale to a mix, his favorite microphones, why he loves modern technology, and why he’s not afraid of anyone stealing his sound.
The most interesting thing about the score is that there is no orchestra on it – it's a group of very elite musicians being very creative with sound. It was the same for Dune: Part One, and that became the philosophy because Dune: Part One was recorded during the pandemic, when we couldn't book an orchestra. We ended up doing something that was more sound-design driven, and it sounded so great that Hans decided he'd like to do that for Dune: Part Two as well.
When I get a score to mix by composers who are very good at doing their own sort of pre-mixing, it sounds pretty great already and I have to start figuring out where I can add value to it. Sometimes it's about doing very little, and you feel like you didn't do anything until you compare it to what it was like before, in what we call the ref check. It's a light touch, but it's effective because of that and it really does transform the score into something that sounds much more finished, polished, and professional.
On this score, I did gentle amounts of imaging, modifying, and making things wider or narrower, so that not everything was living in the same place. It was about trying to create an image that was completely in line with the integrity of the piece of music.
I used subtractive EQ in a lot of places where I found frequencies and sounds that were creating clutter. After identifying the unnecessary resonant frequencies, I used the Massenburg MDW EQ to subtract them because it’s really good with the way that it solos the bandwidth and lets you focus on what you're doing. I can easily sing a tone – I hear it in my head and I sing it out loud – and find that frequency almost immediately and remove it.
I use the FabFilter Pro-Q 3 EQ a lot as well because I get some very tight bandwidth cuts and can even sometimes attack a harmonic row if I need to. The first plug-in on the effects chain is the EQ cut, because the way I look at it, the stuff I'm taking out shouldn't have been there in the first place – I don't want that sound to hit my reverbs or any other processing.
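As a rough illustration of that idea – not Alan's actual tools or settings – a tight subtractive cut can be sketched as a high-Q notch filter placed first in the chain, before any reverb or other processing touches the signal. The sample rate, resonant frequency, and Q below are all hypothetical:

```python
# Illustrative sketch of a surgical subtractive cut: a narrow notch
# applied before any other processing, so the offending resonance
# never reaches downstream reverbs or saturation. All values are
# hypothetical examples, not settings from the interview.
import numpy as np
from scipy.signal import iirnotch, lfilter

SR = 48000          # sample rate in Hz
RESONANCE = 440.0   # hypothetical resonant frequency identified by ear
Q = 30.0            # high Q = tight bandwidth, like a surgical EQ cut

# Test signal: the unwanted resonance plus broadband content to keep.
t = np.arange(SR) / SR
signal = (np.sin(2 * np.pi * RESONANCE * t)
          + 0.1 * np.random.default_rng(0).standard_normal(SR))

b, a = iirnotch(RESONANCE, Q, fs=SR)  # design the notch
cleaned = lfilter(b, a, signal)       # apply it first in the chain

print(f"RMS before: {np.sqrt(np.mean(signal**2)):.3f}")
print(f"RMS after:  {np.sqrt(np.mean(cleaned**2)):.3f}")
```

The higher the Q, the narrower the band that gets removed, which is why a tight notch can take out a ringing frequency without dulling the surrounding material.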
I'm all in the box. I still love analog mixing, to be honest with you, but the sessions have gotten so big and the requirements for delivery have gotten so overblown that it doesn't make sense for me to try to build anything outside of the box that might slow down my ability to recall a mix. On something like Dune, I'll finish a mix, and then they'll do four more parts, so I have to call it back up and add those parts – and that can happen several times. But I like doing it like that; it gives me a chance to put it away, think about it, bring it back up, listen, and see how present-day Alan judges the Alan from yesterday in terms of what he did. It’s a fun process. Well, it’s either fun or it’s disastrous! [Laughs]
To make a mix sound big and impactful, it’s about subtraction, once again: it’s about what doesn't need to be in the low end or the midrange, and clearing it out so that the stuff that does want to be there can be accentuated, augmented, and made to sound bigger. It's not about going ‘bigger, bigger, bigger’ – the end result of bigger, bigger, bigger is small!
It's like if you give everything the same stereo width, it's pretty much mono, and there's no movement. So shaping a mix to sound big is a combination of removing frequencies you don’t like and imaging, which is about placing things. I don't really do panning that much anymore, I use more weighted imaging. There’s a plug-in by Leapwing Audio called StageOne that enables you to create width and then, within that width, shift where the center is so that you're not mono panning. It's a more elegant way to do it.
With a more delicate instrument like the duduk, the playing is low and thoughtful, but I can't let it disappear, so you have to limit the dynamics a little bit without taking away the sense that the dynamics exist. The best compressor in the world is still my finger – I can ride automation and that helps, and I'll use a couple of different compressors as well. When I'm trying to create an overall increase in loudness, my philosophy is: high threshold, low ratio, medium attack, and medium to fast release. The compressor then does a constant, gentle ride to keep the level up; that's what I do on solo instruments like the duduk or a guitar or on a vocal.
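That settings philosophy can be sketched as a static gain curve – a hypothetical toy illustration, not a model of any particular compressor he uses. With a high threshold and a low ratio, only the loudest moments are gently pulled down:

```python
# Toy gain computer for a "high threshold, low ratio" compressor.
# Threshold and ratio values are illustrative assumptions only.
import numpy as np

def gain_db(level_db: np.ndarray,
            threshold_db: float = -6.0,
            ratio: float = 1.5) -> np.ndarray:
    """Static curve: unity below threshold; above it, the signal is
    reduced so that each dB over the threshold comes out as 1/ratio dB."""
    over = np.maximum(level_db - threshold_db, 0.0)
    return -over * (1.0 - 1.0 / ratio)

levels = np.array([-20.0, -6.0, 0.0, 6.0])  # input levels in dBFS
print(gain_db(levels))  # only the two entries above threshold are reduced
```

Because the ratio is low, even peaks well over the threshold only lose a couple of dB – the dynamics are limited "without taking away the sense that the dynamics exist."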
The last thing I use is EQ – I’m talking about boosting now. I try to do everything I need to do without using EQ and then I can listen and go, for example, “It could be warmer” and address that as a global thing as opposed to soloing the sound. Everything has to happen in the context of the music. I love all my sounds that sound great individually, but the truth is that sometimes what's great for the score is not great for the individual sound.
The score had to have that 1930s-40s feel, so I did a lot of research. I listened to a lot of Bernard Herrmann scores, which I loved doing; I listened to the older ones that are in mono and the more modern ones in stereo, so I could find what the patina was going to be.
It was not so much about authenticity, it was about the illusion of authenticity, if that makes sense. If you made the score truly authentic sounding, it wouldn't really work in a modern film because nothing else is authentic sounding like that. So you have to give the illusion of authenticity and that was something that we worked towards. I did many, many versions of the first few cues using different sonic environments, plug-ins, and room simulations.
Mank was probably the third film I did during the pandemic and having musicians record and send back parts was always limited by the fact that everyone has their own way of recording. You could have two violin players where one would be close-micing in a closet, and the other one would be playing in their living room with all this terrible resonance, and matching that up would become really hard.
So I said, “Look, we know we want to get something that is period-specific. I have 24 ribbon mics; why don't I send the mics to all of the players with instructions on how to set up the micing?” That way we set up sort of a standard of a sonic relationship: we took their room out of the equation as much as possible and established the micing distance for each instrument. The players would send me test recordings and I would listen to them and give feedback; it worked really well. They loved the idea that there was some sort of control and that we were being very specific about it.
To record the instruments, we used a lot of Royers: the SF-24 stereo mics and the 122s. I have five Coles 4038s, so we just distributed those on some of the brass stuff. I also have two original vintage RCA 77-DXs, which I got pretty much new in 1977. So, it was recorded using a collection of all of the hip and groovy ribbon mics available today. It would have been fun to get into the Beyerdynamic 160s, but by then we had everything covered.
I then put it all in Vienna Symphonic Library's MIR Pro 3D – it's one algorithm that all of your inputs go into – and I could pan everything appropriately for where it should be orchestrally, which in the end didn't matter much because they wanted the score pretty much in mono. [Laughs] Everything got narrower as it went on, which was kind of fun too.
In terms of plug-ins, I don't really remember exactly, but I can imagine that I did some sort of aging process on the sound so there’s some sort of harmonic filter in there. It could have been one of many things: probably something as simple as an EQ combined with a PSP VintageWarmer or some variety of that.
There was probably a decent amount of compression as well on everything because one of the things about scores in those days is that they had no dynamic range! The scores were literally sitting at exactly one volume the whole way – not like a modern pop record where it's a ribbon – but that's the way it was produced in those days, so I'm sure I used compressors to create that effect. It was a very long process but the good news is I had nothing else to do during the pandemic, so I just sat at home and mixed for fourteen hours! [Laughs]
Wow, we’re going all the way back – that's the first Marvel Cinematic Universe movie! It was a great score, and that’s all Ramin Djawadi. I don't remember much other than it was difficult to get everyone happy. [Laughs] They didn't have a sonic signature yet and weren't sure exactly what the approach should be, so poor Ramin was just sent around in circles trying to figure out what to do. Then once we mixed it, there was this global thing of “We need to add more rhythm section”, so all of a sudden, up against a finished mix, we were doing drum kits and stuff like that. It was quite a process!
The thing I remember the most was my son, Joe, playing Guitar Hero with Jon Favreau's son, Max – they were both around five years old at the time. Max was a badass Guitar Hero player and Joe got obsessed with trying to learn how to play Guitar Hero. Now, everyone has their superpowers, and playing Guitar Hero was not one of my son's superpowers but it was Max's. [Laughs] I remember vividly, coming out from the mixing room, and set up in the lounge area was Guitar Hero with Max wailing on a Tom Morello solo and Joe just trying to figure it out – that's what I remember the most.
In terms of mixing rock guitars with orchestra – believe it or not, they fit together pretty well because they're both very complex sounds; they almost blend with each other in a way, and it's not that difficult to help with that in the mix. It might be something as simple as cleaning out a little bit of the low end, or cleaning out a little bit at 200 Hz.
The biggest thing with guitar and orchestra – again I go back to the basics – is not having them both sit in the same place in the mix. If they're both sitting in exactly the same soundstage, it makes it hard to hear. So you either widen the guitars, if they're stereo guitars, or narrow the guitars and have the orchestra be wide. If you can separate them that way, it's usually the most effective way to make it work.
I’ve said before – don’t build mixes in the middle because that is where the dialog is going to sit. So I don't use the center channel very much at all. The exceptions to that are things where you want to give the impression of something that's in mono, like with bass drum and bass – I tend to put an equal amount in all three LCR speakers so that no matter where you sit, it sort of feels like a mono/center element. Other than that, if there's a vocal solo and it's happening at a time when there's no dialog, I'll put that in the center channel, but I'll never take something like a synth or high percussion and add center to it.
My experience has taught me that what keeps the imaging a little bit wider and more solid is saving the center for just the necessities. I actually don't think in terms of left, center, and right – I think left, right, and surround. It’s almost like I think in 6.1 and then the center is sort of an addition to that where it's necessary.
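The "equal amount in all three LCR speakers" idea can be sketched as an equal-power mono split – a toy illustration under assumed gains, not a model of his actual routing:

```python
# Toy sketch (an assumption, not Alan's actual routing): feeding a mono
# low-end element equally to Left, Center, and Right with equal-power
# scaling, so it reads as a centered mono element from any seat.
import numpy as np

def mono_to_lcr(x: np.ndarray) -> tuple:
    g = 1.0 / np.sqrt(3.0)  # equal-power gain for a three-speaker split
    return g * x, g * x, g * x

bass = np.array([0.5, -0.25, 1.0])  # a few mono samples
L, C, R = mono_to_lcr(bass)
# summed power across L/C/R equals the original mono power
```

Scaling each feed by 1/sqrt(3) keeps the combined acoustic power of the three speakers at the level of the original mono signal, so the element doesn't get louder just because it is in three channels.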
Wreck-It Ralph was a very special score because the composer, Henry Jackman, had such a great vision for it. Because the film was about video games, he decided that he wanted the world to be built out of video game sounds but he didn't want samples – he wanted video game sounds from the instruments that created them in the first place. So he got himself a collection of 8-bit synthesizers and other sources of game sounds and built that into a world that was then augmented by other cool stuff. The philosophy behind that mix was to keep it very light, very bouncy, and big – not big in the sense of something like Gladiator – but big in the sense that it had a lot of movement and a lot of fun to it.
Henry was very particular about his EQs in terms of removing resonant frequencies. We spent a lot of time doing that because there's so much going on that you have to clear out anything that could possibly get in the way. So again, that is a score that is about subtraction, and with the recording of the orchestra, the goal was to get a very detailed recording without really using spot mics because neither Henry nor I really like spot mics.
In my current setup for recording an orchestra, I always put up the traditional Decca Tree environment, which in my case is a set of FLEA 50s because I don't have original 50s and to be honest with you, I'm glad I don't because the maintenance is ridiculous on those. [Laughs]
My wide mics are a bespoke version of the Brauner VM1 – the KHEs – made by Klaus Heyne. I bought them 25 years ago for $9000 per mic, so you can imagine… Those are good mics!
I do very limited spot micing: two mics per section, usually a small cap condenser like a Schoeps. The front one, which is high over the section leaders, is a cardioid mic and the back one is a wide cardioid mic just to cover some space. I try to use those in as limited a manner as possible.
I have four Neumann M 149s that I've been using on the celli; and on the basses, I combine two different microphones: two U47s and two Sennheiser MKH 800s. For woodwinds, I use the Royer 122Vs – they're actually prototypes so they’re powered differently. They can't take the kind of level that the new ones can, but they just sound beautiful on woodwinds. French Horns are very simple – it's a stereo pair in front of the section, and then a stereo pair by the bells, which I don't use unless I need more detail.
I'm always a work in progress on the brass; I'm always trying things and I have many options – I have two FLEA 49s, which sound fantastic, so I use those a lot or I might do a thing where all of the trombones will be 4038s. I also have a lot of Neumann M 49s: three of the new M 49Vs and two original M 49s.
I bought the M 49V without hearing it, for many reasons, one being that I love the M 49 capsule. With this mic, there was a lot of discussion about whether or not it sounded authentic, and I felt that it sounded great but it didn't have as much reach or gravitas as I'm used to from the originals. Klaus Heyne posted a similar thing online, so I reached out to him and said, “I feel the same. Can you help?” He goes, “Send me $1,000 and the two mics.” He then re-tuned the capsules for me and now they sound amazing!
I bought three of them and the very first time I put one up, on an upright bass, the hairs went up on the back of my head! The bass player even commented on it. He was on headphones during the session and he stopped and said, “I've never heard myself like this before. This is unbelievable!” I'm like, “I know, I hear it.” We all heard it!
I'm sure I'll get more, but right now I have nine M 49s: the three new ones, the two originals, and four of the 149s – which is not really the same mic, it's a little bit different in the low end, but it's still a very good mic. I also have two FLEA 49s, so I have 11 choices if I want that sound.
There are mid-distance mics on each string section – usually the Mojave MA-1000 on violin 1, violin 2, viola, celli, and then on the bass I'll use something else, like a 44. In terms of the height mics, I'm not that picky about it. I change things a lot just to experiment with what works. I bought these Schoeps V4s; it's their vocal mic, and I've been experimenting with them on piano, on woodwind overalls, and on violas and they just sound great on everything so you pick what's going to be their job in a normal orchestral setup. There's a lot of experimentation involved.
I've also been using an Ambeo Cube environment for a few years now. I started experimenting with these DPA 4041 microphones set up in an array that's basically an Ambeo Cube. I've now purchased five of them and I'm waiting for them to get delivered by Vintage King and that's going to be part of my regular setup.
In the Henry world, it's all those mid-height mics and the room that became the basic sound of Wreck-It Ralph. I spent quite a bit of time figuring all that out – very difficult to mix because it was very dense and busy and yet it needed to sound light and unbusy, but it was a super fun score.
The challenge with The Dark Knight, and with any Chris Nolan movie, is that you have to keep it as close as you can to the mock-up that they've used to build their temp mix. He doesn't do any pre-mixing, he's mixing the whole time, and the mixes he's doing six months before the film are going to end up in the film. So the biggest obstacle to deal with on a Nolan movie is making sure that the level of your overall mix matches exactly the level of the mock-up mix that they have in their Avid already.
You might be switching out to live orchestra or adding other elements, but you can't really do anything to change either the sound or the perspective, or even have an element jump out that didn't jump out before – if you do that, he'll just get rid of it. It's difficult and very challenging, but it's good practice in patience, humility, and very deep, specific listening.
I don't remember anything in particular about what gear I used because it's so long ago now. There were some cues that had big hits followed by very quiet parts, and to balance those, I might have done sidechain compression on the low element to get it out of the way of the Braams, so that when it came back in, it came back in at a good level. Or maybe I had some sort of harmonic processor on it, like a FabFilter Saturn, a PSP VintageWarmer, or a Soundtoys Decapitator – or it might have been about automating the drive on it.
If I did it now, I’d do it a different way and get the same results, but the technique that I talk about the most is that I have no technique – I really don't. I have ideas and I try the ideas until I find one that works. There's very little stuff in what I do that is going to be the same over and over again because, “This is the way.”
It's why I laugh when people reach out to me and go, “What’s that reverb you used?” It doesn't really matter what it is – I could have done it with five other different things. It's not about how ‘Alan Meyerson uses Relab’s LX480 reverb’, or ‘Alan uses Lowender.’ I do use Lowender, but I could also use a subharmonic synthesizer…
That’s why I'll tell anyone anything about my setup. For example, at Mix with the Masters, I show them everything. The last thing in the world I'm worried about is someone stealing my sound because my sound is created in my head. In fact, it's not even created in my head – my sound is created in my heart and it's very, very visceral and specific to what I'm doing in that very moment. So good luck with that!
If you want to get into my heart and do that, then you're going to have to deal with all the pain that got me here – the years of not making any money and the years of 100-hour weeks and things like that. There is some value to reaching this point – and that's one of the great things about modern technology – where you’re able to act on a sound that you hear in your head and try to create it.
I enjoyed mixing The Dark Knight very much, but so much has changed since then. This was way before we had the level of technology we have now, so I'm sure a lot of it was about managing pre-mixes and figuring out ways to get the computer to play back 750 tracks – which is easy to do now.
Everyone goes on about how the days of analog were the best and I call bullshit on that because you can do more today than you could ever do before. You really can have an idea and just try and execute it, as opposed to trying to figure out, “Well, how can I get that piece of gear? What do I have to patch in? Where can I bring it up on the console?” I've done a bunch of movies with the director Robert Rodriguez who had this expression – he wants to operate at “the speed of thought”. [Laughs] So, I'm trying to get as close to the speed of thought as I can.