The diet model can be seen as a variant of the blending model; the classical version calls for determining the diet, or mixture of foods, of minimum cost subject to constraints on nutritional content.
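
A minimal sketch of that classical formulation (the symbols below are illustrative, not taken from the source): let x_j be the quantity of food j in the diet, c_j its unit cost, a_ij the amount of nutrient i per unit of food j, and b_i the minimum requirement for nutrient i. The linear program then reads:

\[
\begin{aligned}
\min_{x}\quad & \sum_{j=1}^{n} c_j x_j \\
\text{subject to}\quad & \sum_{j=1}^{n} a_{ij} x_j \ge b_i, \qquad i = 1, \dots, m \\
& x_j \ge 0, \qquad j = 1, \dots, n
\end{aligned}
\]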

A model rather different from the two presented previously is the following: consider a situation in which a certain number n of projects (for example, the development of modules of a software project) must be assigned to an equal number n of designers or developers. Each designer must carry out exactly one project – the task is therefore to put the projects and the designers into one-to-one correspondence (there are variants of the model in which not all projects have to be carried out, and others in which some designer may carry out more than one project). Training designer i to carry out project j requires a training period whose cost is estimated at c_ij. The assignment problem consists of determining the project/designer pairing that minimises the total training cost.
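
A small computational sketch of this assignment problem, assuming a cost matrix c[i, j] for training designer i on project j (the matrix values below are made up purely for illustration); the Hungarian-method solver in SciPy handles the one-to-one matching:

import numpy as np
from scipy.optimize import linear_sum_assignment

# Illustrative training-cost matrix: c[i, j] = cost of training designer i for project j
c = np.array([
    [4.0, 2.0, 8.0],
    [4.0, 3.0, 7.0],
    [3.0, 1.0, 6.0],
])

# linear_sum_assignment finds the one-to-one designer/project pairing of minimum total cost
designers, projects = linear_sum_assignment(c)

for i, j in zip(designers, projects):
    print(f"designer {i} -> project {j} (cost {c[i, j]})")
print("total training cost:", c[designers, projects].sum())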

Mixing Bass: How To Craft The Perfect Bottom End
Published in SOS September 2012

Avoid all the low-frequency pitfalls and learn to achieve the perfect foundation for any mix, with our bass-mixing masterclass...

Mike Senior

How do I mix bass? It's a simple question, but compare a dozen records picked at random and you'll see that there's no simple answer. When it comes to instruments, 'bass' can mean (at the very least) guitar, upright, drum or synth. Each can perform many musical roles, and every genre has different conventions for low-end sonics. In this article, I'll help you make sense of all that, whatever instruments or genre you're working with.

Cancellation Insurance

A bass 'sound' is often a combination of several similar signals: for example, electric bass can be multi-miked; a DI signal may be captured; and you might introduce MIDI-triggered layers to fill things out further. Such shenanigans give you tremendous power to refine your sound, but also enough rope to hang yourself, because the layers don't always reinforce each other when mixed. In fact, they can cancel gruesomely at certain frequencies if there are polarity or phase mismatches — so you need a clear understanding of phase and polarity! There's an in-depth article on the SOS web site (/sos/apr08/articles/phasedemystified.htm) but I'll run through the basics.

Phase differences are caused by one signal being delayed relative to another; and polarity differences are caused by one waveform being inverted relative to another. If you're unlucky, the phase/polarity relationship between a pair of similar signals can result in tonal carnage when they're combined, and you must tackle such issues as early as possible.
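
If it helps to see the arithmetic, here's a small numpy sketch (purely illustrative, not from the article) of why a polarity flip or a short delay between two otherwise identical bass layers changes what you hear when they're summed:

import numpy as np

fs = 44100                            # sample rate in Hz
t = np.arange(fs) / fs                # one second of time
bass = np.sin(2 * np.pi * 50 * t)     # a 50Hz 'bass' tone

def rms(x):
    return np.sqrt(np.mean(x ** 2))

# Perfectly aligned layers reinforce each other (+6dB)
print("aligned sum:         ", rms(bass + bass))

# A polarity-inverted copy cancels completely
print("inverted sum:        ", rms(bass + -bass))

# A copy delayed by half a cycle (10ms at 50Hz) also cancels at that frequency,
# even though neither signal has been 'inverted'
delay = int(0.010 * fs)
delayed = np.roll(bass, delay)
print("half-cycle delay sum:", rms(bass + delayed))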

With multi-mic/DI recordings, a good way to start is to zoom in on their waveforms and try to match them up as closely as possible, so that phase and polarity differences are minimised and you get the strongest reinforcement. Sort out any obviously polarity-inverted waveform first — by either processing the audio region or hitting that channel's polarity-inversion switch — and drag the audio regions to line up better. If judging things visually is tricky, hunt for transients, which tend to be more easily identifiable.

Now to start refining things by ear. Put the first two tracks out of polarity with each other, fade them up to equal levels, and adjust the timing offset between them to achieve the strongest cancellation. Returning to a matched polarity will then give you the fullest composite sound. Repeat this process, adjusting the timing of each new layer in relation to those you've phase-matched.
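
If you want a numerical starting point before fine-tuning by ear, the 'flip one layer and listen for the deepest cancellation' trick can be approximated offline: slide one track against the other and keep the offset at which the polarity-inverted sum has the least energy. A rough numpy sketch (the signal names and search range are my assumptions, not from the article):

import numpy as np

def best_offset(mic, di, fs, max_ms=10.0):
    """Return the sample offset (within +/- max_ms) at which a polarity-inverted
    DI cancels the mic signal most strongly, i.e. where the two layers line up best."""
    max_lag = int(fs * max_ms / 1000)
    lags = np.arange(-max_lag, max_lag + 1)
    energies = []
    for lag in lags:
        shifted = np.roll(di, lag)                      # crude shift; fine for a rough estimate
        energies.append(np.mean((mic - shifted) ** 2))  # mic plus inverted DI
    return int(lags[int(np.argmin(energies))])

# Hypothetical usage with two mono float arrays recorded at the same sample rate:
# offset = best_offset(mic_track, di_track, fs=44100)
# print(f"nudge the DI by {offset} samples ({1000 * offset / 44100:.2f} ms)")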

It's by no means 'wrong' to deliberately mismatch polarity and phase settings to radically transform what was captured (this is art, after all) but creative phase-cancellation is something of a lottery, and there's a tendency for it to mess with the relative balance of different note pitches, thus introducing musical irregularities.

Phase Me Baby, Right Round...

It's often hard to judge the relative polarity and timing offset of mic and DI bass signals by looking at their waveforms (upper pair). It's easier if you focus on transients, such as the note onset (lower pair). Even then, though, you need to use your ears.

A specialist 'phase rotation' device allows you to delay different frequencies by different amounts (for links to affordable phase-rotation plug-ins, go to www.cambridge-mt.com/ms-ch8.htm#links-phase).

Phase rotation won't change a channel's frequency response in isolation, but it will change the way one layer of a multi-channel sound interacts with others. I find it more time-efficient to grapple with polarity and timing adjustments before faffing with phase-rotation, and there's no point in trying to finesse exact phase relationships if they don't stay consistent (as in the case of most multi-miked acoustic bass parts, where instrument movements will alter the relative path-lengths to the mics, and hence the time-offset). But I do use phase rotation a lot when mixing processed and unprocessed versions of the same bass sound — something called 'parallel processing'. Most DAW systems auto-compensate for a plug-in's processing latency, but some plug-ins (equalisers and amp emulators in particular) generate additional time/phase shifts, and a phase rotator or simple delay line can help to compensate for this.

There may also be hidden phase gremlins between the left and right channels of stereo bass-synth patches, which you'll only hear when the channels are mixed to mono. The worst-case scenario is that the low frequencies will cancel badly, and won't make it out of club and PA systems, or single-subwoofer home/car systems. If the phase mismatch is static, adjusting the polarity, timing, or phase response of one channel may help, but if the bass is seriously flaky in mono, you might as well filter it out and layer in a mono sub-bass synth.

EQ: The First Two Octaves

The 20-100Hz frequency region presents probably the most difficult challenge, as it includes the fundamental frequency of most acoustic/electric bass notes, and maybe a harmonic or two besides for the most seismic of synths. Studio monitoring has a lot to answer for here (see the 'Bass Under Pressure' box), but it's also a question of EQ technique. Be cautious with low-shelving boosts if your monitoring system (including your room as well as your speakers) struggles to convey information below 40-50Hz. Lots of rubbish like traffic rumble and mechanical thuds can be lurking at the spectrum's low extremes, and you don't want to boost this. If you must apply a shelving boost, also use a 20-30Hz high-pass filter for safety. LF shelving filters also continue acting, to some degree, well beyond their specified frequency, so if you find you've collected excess low mid-range baggage while trying to boost the true low end, a compensatory peaking cut at 200-400Hz may be in order.

Beyond broad-brush decisions, the most common job is compensating for unhelpful resonances. Acoustic bass tracks always seem to feature one or two fundamentals that boom out awkwardly, but room resonances can also afflict miked amp recordings, aided and abetted by the cab's resonant structure. Even the recording mic can play a role, especially if it's one with a frequency response heavily tailored to rock kick-drum sounds. The simplest remedy is to deploy well-targeted narrow-band peaking cuts. Find a pitch that consistently booms undesirably, and loop a representative note. Then sweep around with a narrow peaking filter in the sub-100Hz region to see if you can bring the errant frequency back into a better balance. Boosting with the filter first can assist with finding the right frequency, as can a high-resolution spectrum analyser.
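
As a rough illustration of that kind of narrow cut (a sketch, not a recipe from the article), here's a standard 'cookbook' peaking-EQ biquad dipping a hypothetical 55Hz boom by 6dB; the centre frequency, gain and Q are arbitrary example values:

import numpy as np
from scipy.signal import lfilter

def peaking_eq(fs, f0, gain_db, q):
    """Standard cookbook peaking-EQ biquad coefficients (b, a)."""
    A = 10 ** (gain_db / 40)
    w0 = 2 * np.pi * f0 / fs
    alpha = np.sin(w0) / (2 * q)
    b = np.array([1 + alpha * A, -2 * np.cos(w0), 1 - alpha * A])
    a = np.array([1 + alpha / A, -2 * np.cos(w0), 1 - alpha / A])
    return b / a[0], a / a[0]

# Hypothetical example: cut a boomy 55Hz resonance by 6dB with a fairly narrow Q of 8
fs = 44100
b, a = peaking_eq(fs, f0=55.0, gain_db=-6.0, q=8.0)

# Apply it to a bass recording held in a mono float array called `bass`
# (placeholder noise here, just so the sketch runs end to end)
bass = np.random.randn(fs)
treated = lfilter(b, a, bass)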

A Q value of eight is a reasonable starting point, but be prepared to adjust that by ear: some resonances may affect several adjacent pitches, requiring a wider bandwidth, but otherwise, try to increase the Q value as much as you can (without making the cut ineffective!) to avoid messing with the spectral balance of other notes.

Low-end Interactions

Amp simulator plug-ins (those from Aradaz, Acme Bar Gig, and IK Multimedia are shown) are often useful for processing bass parts at mixdown, but be careful that phase shifts incurred by the processing don't introduce unwanted phase-cancellation side-effects, especially when using them for parallel processing.

No matter how solid your subs in isolation, they won't do you much good if the rest of your arrangement clouds them over, or if they interfere with the low end of other important tracks. For a start, if there's more than one bass part (perhaps a bass guitar layered with a synth bass), I'd usually choose only one as the main low-end source, and high-pass filter the others around 100Hz, to avoid insidious phase-cancellation nasties between their long-waveform LF components, which would be pretty much unfixable with mix processing. The low-end level modulation inherent in some detuned multi-oscillator synth patches is similarly undesirable if you want an absolutely solid low end, so if you can't switch off the patch's detune directly, I'd suggest filtering off the synth's lower octaves and replacing them with a more reliable static sub-bass synth.

With multi-mic or 'mic + DI' recordings, you'll often find that one signal provides a clearer low-end than the other(s), and high-pass filtering can again help add focus and definition to the final product. The subjective timbre of the combined sound is heavily dependent on the mid-range, so as long as you don't move your filtering too far above 100Hz, you shouldn't need to worry.

High-pass filtering is also handy for removing low-end junk from other instruments in your arrangement, to help the low end of your bass part pop through more cleanly. Full-range keyboard instruments such as synths, pianos and organs warrant special attention, as may orchestral overdubs, found-sound snippets or sampled mix loops, any of which could conceal a lot of unwanted rumble. Doing this has an extra benefit if you're working under less-than-ideal monitoring conditions: if you dramatically undercook your mix's overall LF levels, it's then easier to correct using mastering processes without dredging up a bunch of underlying sludge at the same time.

Sub Warfare

The most critical sub-100Hz conflict in modern mixes is that between bass and kick drum: their low frequencies are normally responsible for the lion's share of the mix bus's output level, and therefore present the primary headroom bottleneck at mixdown and mastering. The engineer's task is to divide the available headroom appropriately between these two main LF sources. If your bass line needs to relieve people of their fillings (think Nero's 'Guilt' or Pendulum's 'Watercolour'), you're unlikely to have the headroom to put much real low-end on the kick-drum channel: you'll have to move up into the 100-200Hz zone to salvage any beef. Alternatively, if your kick's threatening to wake Godzilla (as on Rihanna's 'Umbrella' or the Pussycat Dolls' 'When I Grow Up'), you'll have to be sparing with your bass channel's super-low frequencies.

Just a phase thing?

If your main synth-bass part has a phase/polarity mismatch between its left and right channels, the part's low end will suffer in level and/or consistency when those channels are mixed together. You could, for example, be in for a nasty surprise if it's played over a club system, because many PAs sum low frequencies to mono.

This doesn't mean to say that producers haven't bust blood vessels trying to square this circle! A time-honoured technique of the dance fraternity, for instance, is to separate the kick and bass parts in time, as epitomised in the simple off-beat cliché of Kylie's 'Can't Get You Out Of My Head' and, more recently, in 3/16th-based syncopated club hits like Inna's 'Déjà Vu' or Chris Brown's 'Yeah 3x'. Another idea you can hear in urban and club-oriented productions is to give the bass most of the sub-bass energy, while ensuring that it's always playing together with a less sub-heavy kick, yielding a convincing illusion that the kick is better LF-endowed than it actually is. Some producers also allow their kick parts to overdrive the mix bus and/or final mastering chain, factoring in the inevitable distortion side-effects while mixing, in order to circumvent apparent kick + bass LF energy limitations — 50 Cent's 'In Da Club' springs to mind. If you decide on this contentious approach, make the kick-drum sound fairly short and tight, not only as a way of minimising the distortion's audibility, but also to keep the sub-40Hz energy in check; clipping super-low frequencies can easily make a kick drum sound like it's 'folding' or flamming.

Most LF shelving filters affect the frequency balance above the point specified by the frequency control, and can therefore add low mid-range mud as well as bass. A small peaking-filter cut around 200-400Hz can compensate for this, as you can see in this screenshot of ToneBoosters' TB_Equalizer. The yellow trace shows the combined effects of band 1's shelving-filter boost and band 2's peaking-filter cut.

In productions less fixated on hyping the low end, the clarity and separation of the bass and kick becomes a greater goal, so that they populate the sub-100Hz region in a satisfying manner, whether separately or in combination. EQ can help, by focusing each instrument into different regions of the low spectrum, as well as by cutting any obvious frequency 'hot-spots' that may skew the overall mix tonality when the instruments play together. The 41Hz fundamental of a bass guitar's low 'E' plays into your hands in this respect, as it frees the bottom octave for the kick-drum. If depth of bass tone is important to you (for something smoochy like James Morrison's 'I Won't Let You Go'), you'll want to give the bass as much room in the 40-80Hz region as you can without completely losing the weight of the kick. On the other hand, for tracks where the groove needs to really rocket along (as in the Foo Fighters' 'Rope', for example), the drums can't afford too much of the sluggishness that the lowest octave imparts, and driving the kick's 60-70Hz region harder, at the expense of the bass, becomes a valid trade-off.

The same basic principles carry over into electronic styles, but with a greater likelihood of sub-40Hz conflicts. The opportunity to nudge your kick sample's pitch can save a good deal of EQ work, by shifting its frequency peaks into the bass part's natural spectral troughs.
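
Nudging a sample's pitch is easy to prototype offline: resampling a one-shot shifts its spectral peaks (and its length) by the same ratio. A small sketch, assuming a mono kick sample loaded as a float array; the two-semitone value is just an example, not a recommendation from the article:

import numpy as np
from scipy.signal import resample

def pitch_nudge(sample, semitones):
    """Shift a one-shot's pitch by resampling; positive values raise the pitch
    (and shorten the hit), negative values lower it (and lengthen it)."""
    ratio = 2 ** (semitones / 12)
    new_len = int(round(len(sample) / ratio))
    return resample(sample, new_len)

# Hypothetical usage: drop a kick sample by two semitones to move its main
# spectral peak away from the bass line's strongest fundamentals
# kick_down = pitch_nudge(kick, -2)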

Kick-drum pitch adjustments can also help avoid the drum's low-pitched resonances sounding in unison with bass-line harmonics, which once again carries the risk that phase-cancellation will emasculate some hits.

Boosting What's Not There!

Multi-oscillator detuned bass synth patches can cause mono compatibility problems.

If your bass instrument produces no real energy below 40Hz, there's no point boosting down there with EQ. So what can you do instead to underpin your bass with those kinds of frequencies, or, indeed, to replace unsalvageable low-octave dross you've filtered out? Many manufacturers provide processors that promise to generate new low frequencies. They range from simple octaver stomp-boxes to fairly sophisticated subharmonic soft-synths, such as Logic's SubBass, but I've always found them disappointing on real-world bass parts, giving vague, warbly pitching, and responding rather unpredictably to things like guitar distortion, mechanical noises and synth oscillator layering. Instead, I now almost always just program a simple MIDI synth line for the purpose. It never seems to take longer than 15 minutes to tap in the MIDI notes for most chart-orientated productions, and once you've settled the new synth into the mix, it makes light work of achieving dependable low-end power.

What synth sound should you use, though? Don't look for flashy presets: dull-sounding waveforms like sines and triangles are well suited, and stick with a single oscillator, to avoid unwanted level modulation. A simple on/off amplitude envelope is fine much of the time, but be prepared to bring the sustain-level control down and introduce some decay time if your production features only lightly compressed acoustic or electric bass. Fast attack and release times can cause unwanted clicks and thuds, though, so listen carefully in solo mode to guard against those.

A simple sine-wave sub-octave can be mixed in underneath the existing bass line, but if there's any frequency overlap between the synth and the existing part, things get more complicated. First, you have to decide how much of the sub-bass synth's upper spectrum reaches the mix, and how much of the original part's lower spectrum will remain. For 'black ops' applications, I low-pass filter any non-sine sub-bass waveform fairly severely to keep the more characterful upper frequencies from blowing the 'sub' synth's cover. However, in many cases some low mid-range frequencies from the sub synth do help add warmth to the combined bass tone, which is why I more regularly reach for triangle waves rather than sines for remedial applications.

The other issue is that there's a potential for phase-cancellation at low frequencies if any of the added synth's frequencies end up in unison with those on the main bass track. The tricky thing about this is that it's usually sporadic — you might get a troublesome bass-dip for only one note in a dozen, and that might vary with each playback pass if you're triggering the MIDI synth live in the mix. My first response is to bounce my sub-bass synth's output as audio once I've got it mostly working the way I want, so I don't get live-triggering vagaries. Then I solo the combined bass sound (with the sub-bass addition), check through the track for any low-end holes from phase-cancellation, and shift the timing of any offending sub-bass notes to effect a remedy.
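
To make the 'program a simple synth line' idea concrete, here's a bare-bones offline sketch (my own illustration with assumed note values, not the author's workflow): it renders a sine-wave sub line from a list of MIDI note numbers and durations, with a short fade at each end of every note to avoid the clicks mentioned above.

import numpy as np

def sine_sub(notes, fs=44100, fade_ms=10.0):
    """Render (midi_note, seconds) pairs as a sine-wave sub-bass line."""
    fade = int(fs * fade_ms / 1000)
    out = []
    for midi_note, seconds in notes:
        freq = 440.0 * 2 ** ((midi_note - 69) / 12)    # MIDI note number to Hz
        t = np.arange(int(fs * seconds)) / fs
        note = 0.5 * np.sin(2 * np.pi * freq * t)
        env = np.ones_like(note)
        env[:fade] = np.linspace(0.0, 1.0, fade)       # short attack to avoid clicks
        env[-fade:] = np.linspace(1.0, 0.0, fade)      # short release to avoid thuds
        out.append(note * env)
    return np.concatenate(out)

# Hypothetical example: one bar of E1 and G1 (roughly 41Hz and 49Hz)
sub_line = sine_sub([(28, 1.0), (28, 1.0), (31, 1.0), (28, 1.0)])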

Out Of The Depths

The left-hand spectrogram shows a section of a piano recording, with the lowest note's fundamental at around 130Hz. The energy below this is mostly the ambience and subsonic rumble that's typical of live recordings, especially those made on a tight budget.

There's more to most basses than sub-100Hz welly: the mid-range determines the instrument's timbral appeal, as well as its audibility under the narrow-bandwidth playback conditions that are typical of the mass market. The difficulty with the mid-range is that most things in a mix are fighting for it! For bass instruments, the main battleground is the 'warmth' region below about 300Hz. Everyone likes the idea of things sounding warm, but if everything muscles in on those frequencies, you'll end up with a 'Glastonbury pullover' (a muddy, woolly mess!). If you can more aggressively high-pass filter some non-bass parts, do so. Make sure the whole track is playing as you progressively elevate each filter's cutoff point, and once you start hearing an undesirable loss of warmth, ease the frequency back down a little and you should be set.

For mainstream chart productions, giving the bass a pretty free rein in the low mid-range helps accentuate the part's melodic features, clarifies the music's harmonies, and allows punchy and uncluttered low-end rhythm. For examples, check out how the bass dominates the 100-200Hz region of Pink's 'Feel Good Time' and Little Boots' 'New In Town', as well as more rock-tinged stuff like Maroon 5's 'Harder To Breathe' and Keane's 'Somewhere Only We Know'.

Low-mid EQ Tactics

Because such clear delineation of the spectrum makes life easier at mixdown, it's tempting to rely on it universally, but more natural-sounding styles benefit from more evenly spread warmth. Sweeping a narrow EQ peak through the low-mids of each track in turn can help locate the main warmth components for every main instrument, and once you know those, you're well equipped to clear out less important frequencies on one track that are obscuring the characteristic frequency features of another — and this is usually more effective than just boosting the bits you like! This kind of EQ'ing can be tough work, and it's not uncommon to be making half-dB adjustments in this range right up until shrink-wrap time. Comparing with relevant commercial productions can be a big help when trying to finalise your decisions, as can your mute buttons. Killing the bass part for a while really highlights other tracks that are over-thickening the mix's mid-range tone, and muting a few suspects will swiftly identify the main culprits.

Low mid-range EQ settings are often so finely balanced that they're the first things to go off the boil when the arrangement changes. In this situation, multing (switching individual tracks between more than one mix channel) is definitely your friend, because it allows different EQ for each section. While you may get away with lots of low-mids on your bass part during a sparser verse texture, a barrage of heavy guitars arriving in the chorus will give you a muffled-sounding frequency build-up if you don't scoop out the bass channel for that section. Indeed, in heavy-rock and metal genres, where wide-panned guitars demand a good deal of low beef, you're likely to find yourself pulling out a good deal of the bass part's low-mids. It may leave no more than flap and fizz on the bass channel, but you'll never get the proper stereo 'chug' out of the full mix if you low-cut the guitars instead.

In a similar vein, don't be afraid to carve away that region of the bass part where acoustic piano or acoustic guitar is taking centre-stage in a more intimate folk or singer-songwriter environment.

Bass Highs

One secret weapon at your disposal when mixing any bass part is its higher frequencies (pretty much anything above 300Hz), which bring the bass's unique timbral character to the fore, pushing beyond its functional role in supporting the groove and harmonies to demand more direct attention from the listener — especially on smaller playback devices. The 1kHz zone is good value in this respect, because a boost there neither upsets the mix's warmth/mud compromise, nor sends too much hiss, amp fuzz, pick noise or filter whistle into a mix's 3-6kHz presence/harshness band. With kick dominating at 60-100Hz and heavy guitars above that, it shouldn't be much of a surprise to find that rock and metal bass sounds frequently stake out the 1kHz neighbourhood. "It's sometimes quite shocking to realise how much top end you need to add to bass to make sure it cuts through a track," remarked celebrated rock mix engineer Rich Costey back in SOS March 2008, for example. "The bass sound in isolation may sound pretty uncomfortable, but in the midst of the swirling din of a dense track, that amount of top end usually works fine."

With this kind of EQ, make a point of regularly checking your results on small speakers. Bass will always be audible on a larger system as long as it has low-frequency content, but if you hear a big audibility drop on the smalls, you probably need to nudge the mid-range. If you dial in a lot of boost, adding a low-pass filter at around 2-3kHz may be wise, so that wide-band HF noise doesn't throw a blanket over the delicate 'air' frequencies of lead instruments and vocals.

Adding Harmonics: Layering & Distortion

If EQ simply won't deliver the mid-range definition you're after, the recorded bass tone probably has little energy in the spectral pocket you want to fill. One tactic is to double the bass line with a MIDI instrument or additional live overdub, perhaps at the octave. I've done this for a few Mix Rescue remixes (see SOS March and October 2011) and as long as you bracket the addition's spectrum fairly strictly with filtering, you can usually fool the ear into thinking the added instrument is actually an integral part of the bass.

Distortion can also produce harmonics, but try out different processors, as they can have widely contrasting characters, and decent freeware distortion plug-ins are ten-a-penny these days: you can find links to some favourites at www.cambridge-mt.com/ms-ch12.htm#links-distortion. I expect to EQ distortion quite heavily to extract only its most pertinent frequencies, especially within an ostensibly clean-sounding style, so I routinely use parallel processing, rather than insert distortion on the bass channel or group bus.

An alternative tool here is a dedicated bass-enhancement processor, such as Waves Renaissance Bass or Universal Audio's Precision Hz. These also generate mid-range harmonics from low bass fundamentals, but in a more subtle and psychoacoustically tailored manner than simple distortion processing, and often with the deliberate aim of making the bass instrument feel subjectively 'bassier' without adding extra sub-bass energy. The danger here, though, is that it's easy to over-egg the woolliness frequencies of your mix, so some compensatory equalisation of the bass-enhanced signal is frequently necessary.
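
A minimal offline sketch of that parallel-distortion idea (my own illustration; the blend amount, band limits and tanh waveshaper are arbitrary choices, not the article's): band-limit a copy of the bass, saturate it to generate harmonics, then mix it back in underneath the clean signal.

import numpy as np
from scipy.signal import butter, lfilter

def parallel_distortion(bass, fs, drive=8.0, blend=0.25):
    """Return bass plus a band-limited, saturated copy mixed in at 'blend'."""
    # Keep only the mid-range of the copy before distorting, so the effect
    # adds definition rather than extra sub-bass or fizz
    b_hp, a_hp = butter(2, 300 / (fs / 2), btype='highpass')
    b_lp, a_lp = butter(2, 3000 / (fs / 2), btype='lowpass')
    mids = lfilter(b_lp, a_lp, lfilter(b_hp, a_hp, bass))
    dirty = np.tanh(drive * mids) / np.tanh(drive)   # simple soft-clipping waveshaper
    # Note: these filters introduce phase shift, so check the combined sound
    # for cancellation against the dry bass (see the main text)
    return bass + blend * dirty

# Hypothetical usage on a mono float array `bass` at 44.1kHz:
# thickened = parallel_distortion(bass, fs=44100)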

A final point to make about EQ is that EQ'ing one channel of a multi-mic/DI configuration, or the return from a parallel distortion effect, will introduce additional phase shift, and may produce an unexpected tonal change. It's not a total no-no, but I find it's better to keep such EQ to a minimum if you've already refined phase and polarity matches, or else to revisit the phase and polarity settings after equalising. Restricting yourself to EQ cuts in this scenario is sensible, since that tends to restrict the main phase-shifts (which often seem to have the subjective effect of making the timbre less 'solid') to areas of the frequency spectrum you want less prominent anyway.

Bass Dynamics

Acoustic bass and clean electric bass recordings inherently have a dynamic range that's inappropriately wide for most chart contexts, so compression is par for the course. Even expertly programmed synth-bass parts often benefit from some smoothing of unwanted level variations. The goal is normally to place the instrument solidly in a fixed mix position, so ratios of 4:1 or higher are commonplace, as are assertive hard-knee compression curves. However, in less heavily marketed and/or acoustic genres, some of the side-effects of high-ratio compression (gain pumping, loss of note attack, distortion) may be much less welcome than small fluctuations in level, in which case lower ratios with soft-knee transitions make sense — although the gentler action of parallel compression also finds favour with many engineers.

Holding the bass's position in the balance mostly requires juggling the Threshold, Ratio, and Make-up Gain controls (or their equivalents), but the attack time parameter can also be very important, especially if you're piling on the gain reduction; too fast, and the compressor will start rounding off individual LF waveform peaks, resulting in distortion; too slow, and the gain-reduction won't catch short-term hot spots, or may over-emphasise note onsets or pick noise. To be fair, both outcomes can be useful on occasion, but the most useful settings for modern productions tend to lie between one and 30 ms. Your release time setting, by contrast, is largely dependent on how prominent you want the note decays, as well as how much gain-reduction you're applying. Set slower, the compressor will retain more of each note's natural envelope, whereas faster settings will reset the gain-reduction more smartly and increase sustain. Finding a good release time is normally pretty straightforward once the attack character's been defined, but if you're applying the processing with a trowel in more intimate instrumental textures, care may be necessary to steer clear of unmusical short-term gain-pumping, especially if there's spill on the recording or short gaps between notes.

Tweaking a compressor's attack and release controls will usually affect the amount of gain reduction, so keep an eye on any available metering and plan to adjust the compressor's threshold, ratio and output gain in response to what you see and hear. It's also worth trying out any dedicated RMS level-detection mode (should your compressor have one), since this averages out the fastest level fluctuations and will usually control bass parts more musically. Don't worry if RMS detection doesn't appear to be available, though, because it's standard in many compressor designs, and certainly don't reject a simpler-looking compressor out of hand on this account.
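
For readers who like to see the moving parts, here's a toy feed-forward compressor in numpy (a teaching sketch with made-up parameter values, not a model of any particular plug-in): an RMS-style envelope follower with separate attack and release smoothing drives the gain computer.

import numpy as np

def compress(x, fs, threshold_db=-20.0, ratio=4.0, attack_ms=10.0, release_ms=120.0):
    """Very simple RMS-detecting compressor: smooth the signal level with separate
    attack/release time constants, then apply gain reduction above the threshold."""
    atk = np.exp(-1.0 / (fs * attack_ms / 1000))
    rel = np.exp(-1.0 / (fs * release_ms / 1000))
    env = 0.0
    gain = np.ones_like(x)
    for n, sample in enumerate(x):
        level = sample * sample                       # instantaneous power
        coeff = atk if level > env else rel           # rise with attack, fall with release
        env = coeff * env + (1 - coeff) * level
        level_db = 10 * np.log10(env + 1e-12)         # smoothed (RMS-ish) level in dB
        over = level_db - threshold_db
        if over > 0:
            gain[n] = 10 ** (-over * (1 - 1 / ratio) / 20)   # dB of reduction -> linear gain
    return x * gain

# Hypothetical usage on a mono float array `bass`:
# squashed = compress(bass, fs=44100) * 10 ** (6 / 20)   # roughly 6dB of make-up gain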

(Indeed, some classic compressors closely associated with bass, such as the Gates Sta-Level or Teletronix LA2A, don't exactly overburden the user with controls.)

When Compression Doesn't Work

No matter how much you sweat over your compressor dials, some bass recordings will refuse to submit to your balance demands without unconscionable trade-offs in the tone or musicality of the line. If the compression only flunks out at certain moments, some audio editing might solve your problem, either by patching over idiosyncrasies with some well-behaved snippets copied from elsewhere, or by multing off troublesome sections for tailor-made remedial measures. Another common problem is where a handful of notes are considerably hotter than the rest, but any compression stiff enough to rebalance them administers the kiss of death to the bass's overall dynamics! A good workaround is to automate a level-drop for those notes pre-compressor (perhaps with a separate plug-in) such that a gentler squeeze can be used.

Also quite typical of budget productions is the relative level of sub-100Hz information changing on a note-by-note basis, often on account of the performance — the bass player's pick/finger occasionally not quite connecting with the string properly, say. Because this problem is both time-varying and frequency-specific, it foxes straightforward compression or EQ, and although editing patch-ups, multing, or automated low shelving can all make useful headway, those approaches are depressingly laborious if the malaise is chronic. That's when I fall back on multi-band compression, using just the lowest band at a high ratio (perhaps 8:1) to salvage some evenness. If you want to try this, start with attack and release times of around 5 and 80ms, then lower the threshold to just tickle the most bass-light notes. Normal notes may then be pounded with 8-12dB gain reduction each, but if you now adjust the LF band's make-up gain to return the previous low-end levels, the result should be a significant increase in the bass power of the underplayed notes. The remainder of the job is massaging the threshold, ratio, make-up gain, and attack/release parameters to achieve the best compromise between the low-end rebalancing (which will probably demand high ratios and faster time constants) and the musicality of the whole track (usually better served by lower ratios and slower time constants).

Where you put the multi-band compressor in your plug-in chain is not a trivial consideration. Putting it before your main full-band bass compressor has the disadvantage that overall fluctuations in the level of the bass part will affect how strongly your salvage processing reacts, whereas putting it afterwards can cause your main bass compressor to respond rather unmusically to the haphazard low end, because low frequencies tend to exert a heavy influence over any full-band compressor's level-detection mechanism. There are a variety of workarounds available, but I favour putting the multi-band processing before the main bass compressor, to encourage that to respond smoothly. Then I use an automated gain plug-in (or region-specific off-line gain edits on the audio track) to tackle any notes that fall outside the comfort zone of my multi-band plug-in's settings.

Downstream Dynamics

An additional complication with bass is that it's not just its own processing you have to consider, but also any additional dynamic-range adjustments separating it from the main mix bus.

A well-known workhorse technique in rock, for example, is to route the bass and kick-drum channels to a compressed group bus, so the bass is ducked slightly by each kick drum. This allows you to feed more sub-100Hz power from both instruments into the mix, so each sound is weighty when heard on its own; but when the two instruments play together, the compressor kicks in to stop their combined level chomping as much mix headroom. You can rarely push the ducking further than about 2-3dB per hit without the bass line beginning to sound odd, but this little bit of 'smoke and mirrors' is nonetheless slyly effective. So popular is this stunt that numerous ways have been dreamt up for doing it. For example, if you insert a compressor onto the bass channel and then trigger its gain reduction from the kick drum (by virtue of the processor's side-chain input), you get a similar action — a scheme I prefer myself, because you retain independent control over the bass signal post-ducking. Some people also use fast-response mix-bus compression to similar ends, ducking the whole mix (including the bass line) in response to the kick drum, but I'm less enamoured of that approach because of the increased probability that other level surges (from snares, toms, or lead vocals, say) will trigger counter-productive bass-ducking.

Even if you're using mix-bus compression in a subtler (and typically slower-acting) 'glue' application, there's a specific bass pitfall to look out for. Consider an archetypal rock verse-chorus transition, where the verse is sparser and tighter instrumentally, while the chorus introduces more sustain generally, as well as some extra high-gain guitar overdubs. In this situation, the mix-bus compressor detects that the average level increases significantly for the chorus, even though the peak levels on your DAW's output meters may not change much. Most well-known bus compressors use RMS level detection, which, you'll remember, responds better to average levels than peaks, so our mix-bus compressor here turns the whole mix down for the choruses — in effect, the extra guitars duck the rest of the band. On the face of it, this isn't a bad thing if that section's goal is to unleash a guitar apocalypse, because making the other instruments sound smaller implies that the guitars must be huge. If the bass guitar loses 2-3dB of level in this way, though, the chorus will lose much of its low-end foundation, downgrading your musical End Of Days to something more like a Plague Of Flies! Once you understand what's going on behind the scenes, it's usually fairly straightforward to counteract the ducking effect by automating the bass fader level or multing that section out to a separate channel for new EQ settings.

The Role Of Automation

Often, in chart styles, so much compression is applied to the bass that automation offers little benefit from a simple balance perspective. To be fair, though, there are some instances in which the vagaries of frequency masking and/or master-bus compression can cause the subjective levels of the bass to waver undesirably, even if compression is nailing the bass levels to the ground, so you can't take it as read that XXL compression settings will solve balance problems. There always seem to be a few interesting melodic fills or counter-melodies that warrant a bit more of a push, in which case a poke of the group-bus fader may be in order.

However, that may not work well for big rides if small-speaker translation is important and/or there's strong mid-range frequency masking from other instruments — by the time you can hear the line on an iPad, the subs will blow the rims off a pimped-out 4x4. If you already have more than one mixer channel allocated to your bass, bumping the level of just one may deliver a more Hummer-friendly alternative. Perhaps you've already high-passed your bass guitar's miked amp signal, or the return channel of a parallel distortion effect, so you could ride either of those up without bloating the lower octaves. In the absence of such options, you could automate a wide mid-frequency EQ boost.

In more lightly processed styles, automation takes on greater importance as a general-purpose balance tool, because (assuming you're not on the Eurovision selection committee) your brain is always more musically sensitive than a bunch of circuitry or DSP code. Whether you create automation data with a physical control-surface or your mouse is immaterial, because the main work of automation is listening. As such, my main advice is to monitor from a real image (coming directly from a physical speaker driver rather than the phantom image that hangs in the air between a stereo speaker pair) while you're doing such rides. Sum your mix to mono, switch off one of your speakers, and you'll almost certainly progress with the task more quickly and more confidently. Also, if widespread public appeal is vital, make sure you validate your automation moves on a small consumer system.

Even if you're not concerned about the custom of the masses, small-speaker listening at the automation stage can still be useful. For example, if you automate to make your bass dependable on your main monitors, but then find that levels are unreliable on a small speaker, it can be a clue that your monitoring room's resonant modes are interfering with your balance judgements, or that there may be untreated inconsistencies in your bass part's important sub-100Hz region.

Mix Effects

Bass is rarely treated to heavy send effects at mixdown, largely because the solidity, clarity and power of its harmonic support can be adversely affected. Modulation effects can smudge the tuning or introduce phase-related timbral 'hollowing', for example, while delays and reverbs can drown the groove and muddy the overall mix tonality. If you choose to slather a bass part in effects for creative reasons, I suggest high-pass filtering the effect returns to avoid technical problems. This will keep the sub-100Hz region clear and solid, and prevent stereo modulation from compromising the bass's mono compatibility. If you want the low-end of a reverb or delay to be a real feature at certain key moments (where there's room in the arrangement for the lows to roll around unchecked), then lower the filter's cutoff with automation at those points.

Record buyers are so used to hearing bone-dry bass that there's usually very little need for reverbs or delays. If the bass doesn't blend enough with the backing, try a short, natural-sounding stereo reverb patch with carefully restricted low frequencies — not just rolling out the sub-100Hz zone, but typically also recessing the region up to around 500Hz to combat muddiness. I might also process the bass's high frequencies in some way, to prevent pick/fret noises from spraying around the stereo image, especially if they've already been emphasised by mid-range EQ boosts.

Such reverb can also widen bass parts that feel underwhelming amongst a wide panorama of heavy guitars or synths, but I usually turn to a simple stereo chorus plug-in myself (often the old freeware Kjaerhus Classic Chorus), again with a high-pass filtered return channel. For more acoustic styles of music, or where you're mixing orchestral double basses, traditional room or hall reverbs can begin to enter the frame, and the bass instruments can begin to be treated in a much more egalitarian way as regards the effects levels. A full overview of generic reverb use is outside the scope of this article, so I'd recommend reading our two-part 'Using Reverb Like A Pro' series from Sound On Sound July and August 2008 if you want more pointers.

The Bass Race

Every generation of engineers seems to want to get better bass on their productions, so who knows what new discoveries might be just around the corner? For now, though, these tried-and-tested mixing methods should put you well on the way to rivalling the current state of the art.

Bass Under Pressure

If you're serious about your bass sound, you need speakers that tell you what's going on below 100Hz, as well as acoustic treatment to prevent the room skewing that information. But even without these, you can improve your LF decision-making. Make a habit of judging the bass balance from a few different points in the room. The room's resonance modes will affect each location differently, so they're easier to factor out mentally. High-resolution spectrum analysis can also help you assess the sub-100Hz region. Some people suggest resting a finger on your woofer cone to gauge sub-bass levels from the drive excursions (as pictured), but I don't recommend it, as a bass note's woofer excursions are heavily dependent on its pitch and can often seem counter-intuitive. Most importantly, compare your mixes with commercial work you admire. Questions of bass frequency balance, dynamic range, mix level and effects use are highly era- and genre-dependent, and commercial tracks are your best guide to your audience's expectations, whether they be Radio 1's millions of listeners or the other member of the Chris De Burgh fan club!

Panning Bass

Where should you pan the bass? Don't! By leaving it in the centre, you'll get the best low-end projection from stereo speakers and retain good mono compatibility. That said, I've noticed a few releases with bass panned very subtly to one side (Coldplay's 'Paradise', for example, discussed in SOS February 2012's The Mix Review), presumably to achieve a slightly better sense of separation in stereo. There's nothing to lose by experimenting with that, as it doesn't have any significant trade-offs.

Reducing Unwanted Noises

Broadband hiss in bass recordings is usually easy to handle unless the arrangement is very sparse, because what isn't masked by other instruments can normally be low-pass filtered without any loss of tone. Where long note-decays reveal the noise unduly, try using automation to close down the low-pass filter further as the overall level reduces. Although specialist plug-ins such as ToneBoosters TB_HumRemover can zap mains hum in an instant, you can't just 'set and forget' on bass, otherwise you'll also remove any bass pitches that correspond to your local AC frequency! Again, automating the strength of the plug-in's processing offers a workaround.
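
If you'd rather roll your own hum treatment than automate a plug-in, the usual approach is a set of narrow notches at the mains frequency and its first few harmonics. A rough sketch, assuming 50Hz mains and a mono float array (my own illustration, not the TB_HumRemover algorithm):

import numpy as np
from scipy.signal import iirnotch, lfilter

def remove_hum(x, fs, mains=50.0, harmonics=4, q=60.0):
    """Cascade narrow notch filters at the mains frequency and its harmonics."""
    y = x
    for k in range(1, harmonics + 1):
        b, a = iirnotch(k * mains, q, fs=fs)
        y = lfilter(b, a, y)
    return y

# Hypothetical usage: dehum = remove_hum(bass, fs=44100)
# Remember the warning above: a static notch will also thin out any bass note
# whose pitch lands on the mains frequency, so apply it selectively.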

Low-frequency thuds (perhaps from the musician tapping their foot, jogging the mic stand, or hitting/slapping the instrument's body/strings) can't easily be removed with high-pass filters, and I favour patching over each note using copy/paste audio edits where possible. Where that's unfeasibly tedious, a multi-band dynamics processor swiftly limiting the sub-200Hz region can bring some improvement. Pick noise and fret buzz/squeak can be a pain too, and if low-pass filtering doesn't yield a solution, I normally turn to multi-band limiting again, this time over the upper half of the spectrum, hammering down the undesirable HF surges and spikes. Detailed fader automation can dip out isolated fret squeaks, but can also punch holes in your low end if used during sustained passages.

Where a synth bass's upper spectrum is garnished with high-resonance filter sweeping, it can be difficult to maximise the bass's sense of power, warmth and textural thickness without the filter peaks slicing your ears to ribbons. Normal compression and EQ are no help whatsoever, because the filter peaks are always there and move their frequency the whole time. Saturating the sound can help, by increasing the synth's general 'background' level of harmonics in relation to the filter peaks, but sometimes that's not enough. In extremis, I'll split the synth's upper frequency response into half a dozen bands using a multi-band dynamics engine, and set each band to skim the top off the itinerant filter peak whenever it's in range. This way, I've always got one of the compression bands ducking a small section of the frequency response, but the bands are all fairly narrow, so the cure usually sounds better than the disease.

One-minute Cheat Sheet: Electric Bass Guitar

Check the polarity/phase relationships of mic and DI tracks.
Cut back over-eager sub-100Hz harmonics with EQ, using Q values as high as possible.
Treat further sub-100Hz inconsistencies with multi-band dynamics processing, or replace those frequencies with a sub-bass synth line.
Heavy compression isn't unusual, but take care with attack and release times to avoid unwanted distortion or lifeless dynamics.
Compare the mix with relevant commercial records. Use your main monitors to focus on the bass's low end and warmth/mud frequencies, but switch to smaller speakers to assess mid-range audibility.
Mute the bass while you tweak the low mid-range balance of other instruments.
To conserve mix headroom, try briefly ducking the bass 2-3dB in response to each kick hit.
Boost at 1kHz for better mid-range cut-through, but add a low-pass filter if HF noises become obtrusive. Parallel distortion can be even more effective, but be careful of phase cancellation.
Limiting above 1kHz with multi-band dynamics can reduce distracting picking or fretting noises.
Multing allows the bass sound to adapt to dramatic arrangement changes, and can also combat any unwanted bass-ducking side-effects of your mix-bus compression.
A touch of stereo chorus can connect the bass with wide-panned guitars, but be wary of sub-100Hz energy from the effects return.
Use fader automation to draw attention to nice fills or licks, so the listener doesn't miss them. This is easier if listening to single-speaker mono playback. If level rides overload the mix with low end, automate a wide 1kHz EQ boost instead.

One-minute Cheat Sheet: Acoustic Bass

Check the polarity/phase relationships between separate mic and DI tracks.
Cut back over-eager sub-100Hz harmonics with EQ, but keep Q values as high as possible.
Tackle remaining sub-100Hz inconsistencies with multi-band dynamics processing, or patch up individual notes using copy/paste editing.
Try not to push beyond 9dB of compression, because fader automation will sound more natural. Set the attack time low enough to usefully control the dynamic range, but high enough to leave some life in the note onsets. Parallel compression can exaggerate note sustains more naturally, if necessary.
Compare the mix with some relevant commercial records. Use your main monitors to focus on the bass's low end and warmth/mud frequencies, but switch to smaller speakers to assess mid-range audibility.
Kick drum will naturally tend to dominate over acoustic bass in the bottom octave, so try high-pass filtering the latter from around 35Hz.
Mute the bass while you tweak the low mid-range balance of other instruments.
Boost at 1kHz for better mid-range cut-through, but be mindful of HF noises or spill. Subtle parallel distortion can be effective too, if well matched for phase.
Limiting above 1kHz with multi-band dynamics can reduce string slap transients.
The global send effects you use to blend together your drums and other instruments should also work fine for the bass.
Use fader automation to draw attention to nice fills or licks so that the listener doesn't miss them. It's easier to do this while listening to single-speaker mono. If level rides overload the mix with low end, try automating a wide 1kHz EQ boost instead.

One-minute Cheat Sheet: Synth Bass

If there are multiple synth layers, avoid LF phase-cancellation difficulties by choosing only one layer to carry the sub-100Hz energy. High-pass filter the rest.
Check stereo synth patches for mono compatibility at the low end.
Adjust MIDI/synth programming to tackle dynamics concerns. If sub-100Hz inconsistencies remain, address them with multi-band dynamics processing, or replace the frequencies with a sub-bass synth.
For layered synth parts, solo all layers together and listen through the whole track carefully. If you discover any LF loss from phase-cancellation, bounce the MIDI parts as audio and adjust the inter-layer timing for offending notes.
Compare the mix with relevant commercial records. Use your main monitors to focus on the bass's low end and warmth/mud frequencies, but switch to smaller speakers to assess mid-range audibility.
If your bass hogs the low end, your kick may need more energy than you expect at 100-200Hz.
To conserve mix headroom, try briefly ducking the bass 2-3dB in response to each kick hit.
Where upper-spectrum filter sweeps are too abrasive, saturation can make them less obvious. Multi-band limiting can go further, but works best with lots of narrow bands.
Use fader automation to draw attention to nice fills or licks, so that the listener doesn't miss them. If possible, do this while listening to single-speaker mono playback.

Listen & Learn!

I've put together a special page on the SOS web site with annotated audio examples demonstrating many of the techniques discussed in the text. For those who'd like to put some of these ideas into practice, there are also links to a selection of freely downloadable multitracks containing acoustic, electric, and synth bass parts, with some notes on the main bass-mixing challenges of each.
/sos/sep12/articles/mixingbassmedia.htm

Bass Tuning & Timing

If you make sure that bass instruments are tuned before recording, bass pitching problems aren't usually a huge issue at mixdown. That's partly because synths and (to a certain extent) fretted basses have pre-quantised pitches, but also because tuning is a relative judgement: even an out-of-tune bass can sound fine if the other parts have been recorded to fit around it! If you do detect some sour notes at the mixing stage, the monophonic nature of most bass parts usually makes it easy to correct them adequately, even with a DAW's built-in pitch-processing. The only time I've bothered to get something specialised like Auto-Tune or Melodyne involved is where the performer of a fretless electric or acoustic upright seems to have been on the bevvies!

Bear in mind that your pitch-processing judgements can be biased according to the way you listen. For example, if a bass note's harmonics are slightly out of tune with its fundamental and you adjust the tuning while working on headphones, you might end up with something that sounds more out of tune on a full-range system. Listening level also has an effect on pitch perception, such that you may perceive bass instruments to be shifting subtly flatter the louder you listen.

Timing is usually a more pressing concern with home-brew bass tracks. The bass contains so much of the audio power in a track, and is often mixed so loud in modern styles, that it constitutes a powerful driver of the song's groove. It's thus rarely a good idea for its timing to disagree with other important rhythmic elements in the track. It's amazing how much tighter it can make a mix feel if you just ensure that the bass and kick drum are fairly closely aligned, for instance. This doesn't mean just lining up the waveforms by eye (which can get you to a good 'starter' position for each note), as things that 'look' in time can sound out of time. There's also a good chance that the groove might sound better with the bass notes slightly trailing or anticipating the drum hits — so, as with all things mix-related, your ears should always be the final arbiters. Don't just concentrate on note onsets, either, as the end-point of a bass note can also make a big difference to the groove.

I've never felt the need for special software for doing bass edits, because crossfaded audio edits always seem fine for the job. Periodically I've tried tangling with time-stretching for bass timing corrections, but I've always ended up feeling that the digital chorusing and 'gargling' artifacts induced in the mid-range have been detrimental to the mix tone, so have always reverted to using simple edits. Most of the time, in the case of bass edits, you can just snip in a gap between bass notes or at a point just before one of the kick-drum beats, and no-one will notice a thing if you apply a few milliseconds of crossfading. On occasion, though, you need to edit in a more exposed location in the middle of a bass note, in which case the trick is to try to match the waveform as closely as possible across the edit point, because any big discontinuity will result in a click. But won't a crossfade just smooth that over? Nope, it'll turn it into a thud, which may well interfere with your rhythmic groove even if it isn't clearly audible in its own right.

Even when you've matched the waveform across the edit, though, it's still wise to put in a short crossfade (over a single waveform cycle or so), but try to select an 'equal gain' crossfade if you can, rather than an 'equal power' one, or else you'll get an unwanted level bump at its centre.
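
As a final illustration of that point (my own sketch, with made-up region names): an 'equal gain' crossfade uses complementary linear ramps, which sum to unity — appropriate for the well-matched, highly correlated material at a carefully chosen bass edit point — whereas curved 'equal power' ramps would bulge by up to 3dB in the middle.

import numpy as np

def equal_gain_crossfade(region_a, region_b, fs, fade_ms=5.0):
    """Splice region_b onto the end of region_a with a linear (equal-gain) crossfade."""
    n = int(fs * fade_ms / 1000)
    fade_out = np.linspace(1.0, 0.0, n)   # applied to the end of region_a
    fade_in = 1.0 - fade_out              # the two ramps always sum to 1.0
    overlap = region_a[-n:] * fade_out + region_b[:n] * fade_in
    return np.concatenate([region_a[:-n], overlap, region_b[n:]])

# Hypothetical usage on two edited bass regions that already line up well:
# spliced = equal_gain_crossfade(take_one, take_two, fs=44100)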