
Mix Compensation for Hearing Impairment?



When mixing, how can I compensate for the fact that I'm partially deaf in one ear? From hearing tests, I know the approximate range and more or less how deaf I am in dB.

Is there a way to turn this information into some kind of filter so that—using headphones—I can still get decent results? Or am I just at that point in life where I should forget about recording and just content myself with playing on the porch?


I'm sure this is a common problem with the aging boomer generation. I have severely compromised high-frequency hearing acquired in the military. Up until now my strategy has been to mix as best I can, taking my hearing loss into account, and then letting a lot of different people, including people on this forum, hear what I've done.

If I recall correctly, the Sonarworks plugin has an algorithm that applies correction based on the average frequency degradation for a given age range, but I don't know whether that will help you given the nature of your hearing loss.

I think what makes the most sense is knowing what frequencies are compromised and setting up a special EQ that compensates for them.  

However you choose to deal with it,  sitting on the porch hasn't been an option for me.


Thanks, Kevin. I found a discussion on another forum about this and most said that despite hearing loss, it didn't seem to affect their mixes. A few said they'd had others with confirmed good hearing listen only to find that the mixes were fine. I suppose the thing to do is carry on and play it by ear. :)


iZotope Tonal Balance Control can be helpful for getting feedback about how the frequency balance of a song compares to its preset references. Reference tracks might help too, just as a check, as might EQ matching to a mix you like. Porches are fun; recording is more fun 😀
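For anyone who wants a quick numeric sanity check of that idea outside the DAW, here's a minimal sketch of comparing a mix's spectral balance against a reference track, assuming Python with numpy, scipy and soundfile installed; the file names and band edges are just placeholders.

```python
# Sketch: compare the average spectral balance of a mix against a reference track.
# File names and band edges are placeholders; adjust to taste.
import numpy as np
import scipy.signal as sig
import soundfile as sf

def band_levels(path, bands):
    x, fs = sf.read(path)
    if x.ndim > 1:
        x = x.mean(axis=1)                 # fold to mono for a tonal-balance check
    f, psd = sig.welch(x, fs=fs, nperseg=8192)
    out = []
    for lo, hi in bands:
        mask = (f >= lo) & (f < hi)
        out.append(10 * np.log10(psd[mask].mean() + 1e-20))
    return np.array(out)

bands = [(20, 120), (120, 500), (500, 2000), (2000, 8000), (8000, 16000)]
mix = band_levels("my_mix.wav", bands)
ref = band_levels("reference.wav", bands)
for (lo, hi), d in zip(bands, mix - ref):
    print(f"{lo:>5}-{hi:<5} Hz: mix is {d:+.1f} dB relative to the reference")
```

It's no substitute for Tonal Balance Control's curated targets, but it gives a rough idea of where a mix drifts from a reference you trust.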


On 5/24/2021 at 11:02 AM, treesha said:

iZotope Tonal Balance Control can be helpful for getting feedback about how the frequency balance of a song compares to its preset references. Reference tracks might help too, just as a check, as might EQ matching to a mix you like. Porches are fun; recording is more fun 😀

Thanks, treesha, and I agree... about the porches. :)


I don't think anyone hears identically in both ears. I certainly don't. My left ear is more sensitive to upper mids and high frequencies. I blame it on many years standing stage left with bands blasting into my right ear. Now, when I talk on the phone it's always held to my left ear. To make sure that's not coloring my mixes, I'll flip my headphones around and see if I still like the mix. (I don't mix on headphones, but headphones in a dark room are always my final QA test.)

If the frequency response is similar in both ears and only the overall sensitivity differs, the easiest solution is the pan control on the master bus (or headphone mix bus if you use one). On a stereo bus that's a simple balance control, like the one on your hi-fi labeled "Balance": it just adjusts the relative left and right volumes.

But if your "bad" ear doesn't register high frequencies as well as the other one, an equalizer can be set up based on the results of your hearing test. Hopefully, the test resulted in a detailed frequency response graph. If not, get a better hearing test from an audiologist who does hearing aids. They have to know the frequency response when prescribing hearing aids, which have built-in filters just for such compensation. The results of such a test should allow you to use an EQ plugin the same way an audiologist adjusts those filters.

I'd suggest setting up a headphone mix bus if you don't already have one and your audio interface has extra outputs or can route a chosen bus to the headphones. This is a bus that doesn't go out to the main speakers and isn't included in exports; it's just for monitoring on headphones. Having a separate bus for your headphone mix means you can compensate not only for hearing imbalances but also for the frequency imbalances built into the headphones themselves. Of course, you could also just insert the compensation on the master bus, but then you'd need to remember to bypass those plugins whenever you export.


  • 4 weeks later...

Significant hearing loss in one ear makes me mistrust my own choices in panning and levels during mixing. Has anybody else experimented with setting the master bus to mono occasionally to see if the levels of the different parts work that way? Maybe this brings the issue of panning laws into the equation.


A common practice is to set levels/balances in mono first, before any panning.

That's not done to make panning more effective, but rather to help prevent the phenomenon where the mix balance sounds fine in stereo but only when listening in the sweet spot between the speakers. That's because panning creates clarity through separation; lose the separation and you lose the clarity. For example, you might be listening to a mix from the next room and noting that it suddenly sounds inexplicably muddy. 
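Since pan laws came up in the question: here's a minimal sketch, assuming numpy, of a constant-power (-3 dB centre) pan law and a plain L+R fold-down. It shows why relative levels shift between stereo and mono: centre-panned material comes up about 3 dB against hard-panned material when the bus is summed.

```python
# Sketch: constant-power (-3 dB centre) pan law and a plain L+R mono fold-down.
import numpy as np

def pan_gains(pan):
    """pan runs from -1 (hard left) to +1 (hard right)."""
    theta = (pan + 1) * np.pi / 4
    return np.cos(theta), np.sin(theta)

def to_db(g):
    return 20 * np.log10(g) if g > 0 else float("-inf")

for pan in (-1.0, 0.0, 1.0):
    gl, gr = pan_gains(pan)
    mono = gl + gr                     # straight L+R fold-down
    print(f"pan {pan:+.0f}: L {to_db(gl):6.1f} dB, R {to_db(gr):6.1f} dB, "
          f"mono sum {to_db(mono):+6.1f} dB")
```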

That doesn't directly address your question, though. A solution is to modify your monitoring balance to compensate for the hearing imbalance. After you've done that, you can be confident that others will hear your pan decisions the same way you do.

I compensate for unbalanced sensitivity between my ears by adding about 2 dB of extra gain on my right speaker. I play some white noise, sit smack dab in the middle between the speakers, close my eyes and listen for the "phantom center". If it sounds centered, I know the right speaker is now correctly compensating for the lower sensitivity in my right ear. I could accomplish the same thing using the balance control on the master bus, but I prefer to adjust the speaker because it's a set-and-forget solution.
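If you'd rather script that check than load a noise-generator plugin, here's a minimal sketch assuming the Python sounddevice library; the 2 dB figure is just my own offset from above, so start at 0 and adjust until the noise sits dead centre.

```python
# Sketch: play white noise with a small dB offset on one channel and listen for
# the phantom centre. Assumes the 'sounddevice' library is installed.
import numpy as np
import sounddevice as sd

fs = 48000
seconds = 5
right_boost_db = 2.0                 # starting point; adjust by ear

noise = np.random.randn(int(fs * seconds)) * 0.05        # keep the level modest
right = noise * 10 ** (right_boost_db / 20)
stereo = np.column_stack([noise, right]).astype(np.float32)

sd.play(stereo, fs)
sd.wait()
```

Once the noise sounds centred, transfer that offset to the speaker trim or monitor controller so it stays set-and-forget.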

 


Same here. I also switch between mono and stereo, plus use a single mono speaker physically set dead center, and I tweak the levels on the speakers and the monitor controller (and sub, etc.) until they're all balanced for levels and panning. Then anything I do in the "console" to pan etc. works as expected. I'm wagering many people don't calibrate their system to compensate for deviations in their hearing; it would probably be a surprise to find people with perfectly balanced hearing...


  • 6 months later...

Has anyone noticed how today's speakers/monitors seem more toppy than older models? Despite my HF hearing loss, my PreSonus monitors and ELACs don't half seem to let the top frequencies through. I did have some older Wharfedales which claimed to have a flat frequency response, but I always struggled to hear hi-hats, crashes, sibilants and the like on them.


I found this video and the accompanying discussion to be quite helpful: 

F**K SECRECY: Hearing Loss and Music Production. Let's talk. 

It covers the problems of mixing with imperfect hearing, and it provides tools and techniques to make sure that the final mix is tailored to the public and not to the sound engineer with impaired hearing.


On 5/23/2021 at 8:09 PM, rontarrant said:

When mixing, how can I compensate for the fact that I'm partially deaf in one ear? From hearing tests, I know the approximate range and more or less how deaf I am in dB.

Is there a way to turn this information into some kind of filter so that—using headphones—I can still get decent results? Or am I just at that point in life where I should forget about recording and just content myself with playing on the porch?

I had a mentor (sadly no longer with us) who had the same problem. Here's the thing with the human ear: we perceive more bass in the left ear than we do in the right ear. This is also true with headphones; it's how they're built.

To test this: listen to the left side (always the one with the cable connected) with the right side off, and then do the same with the right side on the same ear, the one that still receives good information. You will notice that the left side has more bass than the right headphone speaker, while the right side has more detail in it.

The point here is to start swapping the headphones out for monitors, and always tilt your head slightly to bring your left ear forward. Manny Maserati does this quite often, but that's not to say he's partially deaf on the right side. In a well-treated room this will be more noticeable. I recently moved my studio to a backyard space double the size of the one I had in the house, and I'm only starting to notice results again after five months of training and adding more acoustic treatment to the new space to reach the equivalent of my previous control room. So don't fear using normal monitors. You just need to invest some time in finding the right volume that works to your advantage, and keep training. You won't find it at a loud dB level, that I can bet you. You already have a little advantage here, and that is acknowledgement and acceptance of it.

If mixing does not work out for you anymore, there's always the option of focusing on composing and writing. You don't have to give up music altogether. I'm heading towards my mid-30s and I always try to help my peers as much as possible.


10 hours ago, Will_Kaydo said:

We perceive more bass in the left ear than we do in the right ear. This is also true with headphones; it's how they're built.

It's not just bass though, is it? The left ear is better at hearing music, the right ear better at hearing speech? I would have thought headphones are built with each earpiece having the same audio characteristics, a bit like speaker pairs.

I didn't know about this left-right difference until recently; it's made me swap left and right channels a lot more when mixing.


44 minutes ago, Peter C said:

It's not just bass though, is it? The left ear is better at hearing music, the right ear better at hearing speech?

Exactly. That's how every headphone is built today, from consumer headphones right up to high-end studio headphones, some more noticeably than others.

49 minutes ago, Peter C said:

 I would have thought headphones are built with  each earpiece having the same audio characteristics, a bit like speaker pairs. 

There were some recently that tried to go that route, with epic fails, and went rather commercial with it instead: "Beats". They have since corrected that issue, but kept their popularity and added deeper bass.

It's how GOD created them. Great little secret for how you should mix your hard-panned left and right guitars or vocals, with every second one swapped. 😜

Unfortunately, no matter how it's built, your right ear will always filter out that lower register in favour of detail.

 

 


  • 3 months later...

Some of the most enduring pop music mixes were produced by a guy who is deaf in one ear: Brian Wilson. I don't know how he managed stereo mixes; maybe he had help. His stuff still gets heavy airplay over 50 years after it was recorded. Sir George Martin produced into his 80s, and you know he must have had some rolloff by that time.

There is an art to listening that goes beyond the basic sense of hearing. Mix engineers learn to listen to detail, mindfully, which is a talent that is learned. I learned by listening to This Is The Moody Blues on headphones for hours upon hours when I was in high school. Their music and mixes have so much depth and detail that I wanted to explore the space they created.

I have uneven hearing loss in both ears, both because of loud music exposure and age. Also tinnitus. But my ability to listen is still intact.

As @treesha said, listening to reference tracks (I can't stress this enough, it's useful for so many things in the mixing and mastering process) through your monitor system to get a feel for what a balanced mix sounds like is a great idea.

Boz Labs' freeware Panipulator is a great plug-in that lets you mono your mix, swap left and right, and send the mono to one side or other, which is different from listening to mono on two speakers. It also lets you do weirder stuff like flipping phase on one channel to account for poorly configured playback systems. A version of it used to come with SONAR as a ProChannel module.
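For anyone who'd rather script those checks on a rendered file than load a plug-in, here's a minimal sketch of the same manipulations, assuming Python with numpy and soundfile; the file names are placeholders.

```python
# Sketch: Panipulator-style channel checks on a rendered stereo file.
import numpy as np
import soundfile as sf

x, fs = sf.read("my_mix.wav")                 # shape (samples, 2) for stereo
left, right = x[:, 0], x[:, 1]

mono = (left + right) / 2                     # mono on both sides
swapped = x[:, ::-1]                          # left/right swapped
mono_one_side = np.column_stack([mono, np.zeros_like(mono)])   # mono, left only
polarity_flip = np.column_stack([left, -right])                # flip one channel

sf.write("check_mono.wav", mono, fs)
sf.write("check_swapped.wav", swapped, fs)
sf.write("check_mono_left_only.wav", mono_one_side, fs)
sf.write("check_polarity_flip.wav", polarity_flip, fs)
```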

I also check the overall balance (pan and tonal) of my mixes with a couple of Meldaproduction free tools, MStereoAnalyzer and MAnalyzer.

One more thing: I have a nice porch and have been known to take the laptop out there for some producin' and mixin' with my ATH-M50's on. I'm a porch rocker. 😄


Interesting topic, as this is something I just recently pondered. My wife has used hearing aids for around 6 years now. She's always bugging me that I need them too! So to keep the peace I booked a test with her audiologist. It turned out he's a musician and audiophile.

My test came back with my hearing curve dropping off on a steep slope after around 1,200 Hz. My low end is 100% intact. Kinda weird for a bass player?

But this would imply that when mixing I would crank up everything over 1,200 Hz! The exact opposite seems to happen, though: I've found myself not having enough high end in my mixes when "looking" at a spectrograph.

He explained that our brain learns to compensate over time, which is why, even though the input device is broken, the processor makes up for it. So we can maintain an educated judgment of what a song's balance should be, even with a hearing impairment.
He advised that I never change my monitors! He also advised that hearing aids would be beneficial for conversational listening but useless for my music because of the low fidelity of the system.

So this is good news as I don’t need to spend $3,000 just to hear people. I’d rather listen to music anyway. 

 

 

 


  • 9 months later...

One year ago I went diving with a bit of sinus inflammation… which resulted in barotrauma and hearing loss in my right ear. 60 dB of loss for everything in that ear. Irreversible. Music listening suddenly became frustrating AF. Here are my perspectives:

1. My ENT doctor told me to get used to it. Commercial hearing aids are not meant for music (they go up to 8,000 Hz, some to 10,000 Hz, and are optimized for the human voice and environmental sounds, not for music).

2. I tried some expensive hearing aids (a $2,000 pair). They bring clarity to human voices and I can hear some high frequencies better, but low and mid frequencies sound worse (think cheap store earbuds). Listening to music through them sucks.

3. One essential suggestion from an audiologist: use in-ear monitors; that way the high frequencies travel a shorter distance to the eardrum and inner ear.

4. Someone suggested in-ear monitors with very high-quality drivers able to output amplified high frequencies without distorting, PLUS a hardware stereo EQ able to boost high frequencies by 60 dB or more.

5. Alternatively, a stereo plugin that takes everything above 10,000 Hz and shifts it an octave lower would make inaudible frequencies perceivable again (see the sketch below)!

Let me know if you have thoughts on the above.
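On idea 5, here's a rough sketch of what such a plugin would have to do, assuming Python with scipy, librosa and soundfile; the crossover frequency, blend level and file names are arbitrary starting points, and whether the relocated highs actually sound musical is something only listening can tell.

```python
# Sketch of idea 5: isolate the band above ~10 kHz, shift it down an octave, and
# blend it back in so it lands in the still-audible range. Values are placeholders.
import numpy as np
import soundfile as sf
from scipy.signal import butter, sosfiltfilt
import librosa

x, fs = sf.read("song.wav")
if x.ndim > 1:
    x = x.mean(axis=1)                         # mono for simplicity

sos = butter(8, 10000, btype="highpass", fs=fs, output="sos")
high = sosfiltfilt(sos, x)                     # the part the damaged ear can't reach

shifted = librosa.effects.pitch_shift(high, sr=fs, n_steps=-12)   # one octave down
out = x + 0.5 * shifted                        # blend the relocated highs back in

sf.write("song_shifted_highs.wav", out / np.max(np.abs(out)), fs)
```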

It's 2023. I refuse to "get used to it" and miss the joy of listening to music when there is so much technology around that's supposed to make our lives better, not just flood us with cute cat pics.

