I’ve been asked this question at least once a week, my entire life. From strangers, new friends, old friends, and even the occasional family member. It’s a reasonable question, it usually comes from a good place, and I don’t want to answer it anymore. So instead I’ve directed you here, where you can read the complicated answer to your complicated question.
But before I answer, I want you to seriously and carefully reflect on what you are asking, and why.
Do you think you’ll be able to imagine how I hear the world?
Will my answer change how you think of me?
Will it change how you treat me?
I’m not belittling your question; I’m just mystified by what you hope to learn from my answer, and what you will do with that information. If I can’t understand you, I will let you know. If I need an interpreter, or for you to look at me when you’re speaking, I will tell you. I have a lifetime of experience being Deaf.
But you’re not, like, completely deaf, you can still hear right?
First, some background.

The human ear is a complex structure, composed of many interconnected parts working in concert (pun intended) to send signals to your brain. Your brain then deciphers these signals into what we call sound. Generally speaking, Deafness is atypical physiology of the ear, meaning the signals a Deaf person’s brain receives from their ears are different from those of a typical hearing person. These physiological variations can be caused by genetics, illness, physical damage, or just a lifetime of use.

In the typical ear, sound waves travel down the ear canal and vibrate the ear drum. The malleus, incus, and stapes are tiny bones that transfer these vibrations to the cochlea, which is shaped like a snail shell (hence the name). The inside of the cochlea is filled with fluid and lined with stereocilia, hair-like organelles that translate the mechanical pressure fluctuations within that fluid into electrical signals. These electrical signals then travel along the auditory nerve to the brain. Finally, and crucially, your brain translates these signals into the thing we call sound.

We most commonly measure a person’s capacity to detect sound via a hearing test, which results in an audiogram 1. The x-axis represents frequency in hertz, with lower-pitched sounds to the left and higher-pitched sounds to the right. The y-axis represents decibels, with quieter sounds at the top and louder sounds at the bottom. Here is an audiogram with various common sounds plotted in the appropriate places. The bolded letters in the graph indicate where those sounds in English typically land. The general mass of those sounds forms the shaded area, referred to as the speech banana. The general idea is, if someone can hear all the sounds in the speech banana, they can hear typical spoken English.
When I go to my audiologist to get an audiogram made, I sit in a soundproof room without my hearing aids. Pure tones at various frequencies are played over carefully calibrated speakers, beginning quietly and increasing in volume until I can detect them. When these points are plotted on a graph, they form a line. Everything below the line I can hear without my hearing aids, and everything above it, I cannot.
Now if I may step up onto my soapbox for a minute: your insurance is legally required to cover one audiogram per year, and every single person reading this should be getting one every year. The number one side effect of hearing loss is social isolation, and hearing health is grossly underappreciated in this country. Your hearing decreases with age, and you generally won’t notice the slip. Seriously, find an audiologist and pay attention to your hearing health.
Here is where it starts to get really complicated. An audiogram measures your ability to detect the presence of sound. Nothing more. It does not measure the quality of that sound. It does not measure the depth and detail of that sound. Just that sound is detected by the brain. As far as an audiogram is concerned, the sound coming out of a $30,000 Bang & Olufsen speaker is just the same as the sound coming out of the speaker chip of a $3 greeting card. Sound has occurred, and that is all an audiogram can show. This means two people with identical audiograms may send radically different electrical signals to their brains. One might produce a signal that is muddled and difficult for the brain to process, while the other might produce a nuanced signal that is easily deciphered into speech.
To supplement the audiogram, audiologists will often administer a speech recognition test to Deaf patients. They will calibrate the speakers in that soundproof room to my level of hearing, then read from a list of words and ask me to repeat them back. There is no standardization of this test. What they are really measuring is your ability to understand your audiologist as they say things like, “say the word, airplane,” slowly and clearly in a silent room. If your audiologist has a speech pattern you are unfamiliar with, your results will be negatively impacted. The field of audiology is predominantly filled with white practitioners, meaning this style of testing favors patients familiar with the cadence of white speech. Also, it’s the same list of words every time. I’ve heard these words over and over again for 30 years and I know what to listen for. As far as I’m concerned, this test is unscientific and generally unhelpful.
There are more sophisticated variations of the speech recognition test. These involve playing environmental sounds in the background (restaurant chatter, air conditioning, road noise, etc.) in an attempt to recreate real-world scenarios. Then, on top of this background noise, they will play prerecorded full sentences. Some are common phrases where it’s reasonable to assume a person can fill in the blanks of what they cannot hear. They will also play unusual and strange sentences that would never exist in the real world, to truly gauge what you are actually hearing versus what you are guessing. Again, language variations across racial groups mean that these “common phrases” are created by white doctors and are unfairly biased against non-white patients. The test is typically done with hearing aids, one ear at a time and then with both.
The next concept you have to understand about speech recognition is the idea of aural training. People in the music industry can attest that experience listening to a specific sound increases your ability to detect and understand that sound. The best way I can describe this is to use the example of a native English speaker learning a language like Mandarin for the first time. There are sounds in Mandarin that do not exist in the English language. Through practicing and listening to Mandarin, that person can literally learn to hear new sounds. Your hearing hasn’t improved; your brain has simply adapted and gained the ability to parse the electrical signals from your ear in a new way, allowing you to comprehend these new sounds. Your brain is pretty cool. Going back to the fidelity issue, though: your brain still needs enough information from your ear to do anything meaningful with it.
Hearing aids are one of several technological options available to assist the physiology of the ear. Another is cochlear implants 2. When it comes to access and communication, the better option will always be to design our communities and spaces with accessibility in mind; that’s a much larger discussion for another time. Hearing aids have microphones that pick up sounds around the user and use a sophisticated computer system to modify those sounds specifically for that user. My hearing aids, for example, increase the loudness of specific frequencies I cannot hear, and pitch-shift those frequencies into ranges I can hear better. They also reduce background noise, detect and amplify speech, and connect directly to sound sources like a phone or iPod to help with clarity 3.
Another soapbox moment. The choice to use hearing aids, cochlear implants, or neither is none of your business. Each person is entitled to their own medical decisions, and that’s just what this is. I guarantee you every Deaf person has explored every option available to them. You going on about how your cousin benefited from a cochlear implant is not going to suddenly inspire me to undergo brain surgery. We know more about this than you, and we’ve already made our choices for a variety of reasons that we do not have to disclose. These reasons may be financial: hearing aids are extremely expensive and barely covered by insurance, if at all 4. These reasons may also be social. If you’re raised in a family that uses ASL, attend a school that is accessible, and your whole community is able to communicate and interact in a way that works for you, then why would you undergo irreversible brain surgery? Or they might not care about sound, and that’s totally fine too. Like I said before, another solution to overcome communication barriers is to fundamentally change our approach to Deafness by eradicating audism, instead of putting all that on Deaf people. You do not know a person’s circumstances, and it feels incredibly invasive to be asked about our medical choices.
With all that background, here is my answer. I was born with malformed or semi-functional stereocilia in my cochlea, due to inherited genetic traits. I’ve also discovered that the three tiny bones in my ears are partially fused and do not move as intended. This is how my ears have been since birth. Despite these atypical features, my ears seem to produce nuanced and detailed electrical signals, though obviously nowhere near what a typical hearing person’s would.
This is my audiogram; remember, I can’t hear anything above these lines. In other words, I can’t hear anything quieter than around 80 dB. Sounds at 85 dB, only about 5 dB above my threshold, are in theory about the same volume to me as rustling leaves are to you. Speech needs to be in the 95+ dB range for me to hear it as more than a whisper. Medically speaking, I have “bilateral severe to profound mixed hearing loss.” Without my hearing aids I cannot hear myself speak, and any room that isn’t a rock concert is completely silent.
My speech-in-noise test (the sophisticated but flawed test I mentioned earlier 5) indicates that with my hearing aids, in a controlled environment, I can correctly identify 33% of speech with just my left ear, 11% with just my right ear, and 72% with both ears.
My hearing aids amplify sound up to around 130 dB directly against my ear drum. For comparison, the maximum volume of AirPods is 100 dB, and no, I’ve never found a pair of headphones that I can hear.
The sociological factors of my life began at home with language usage. My parents and sibling all learned ASL with me, and we used simultaneous communication at home. This means we used spoken English and ASL at the same time. I also began speech therapy with a fantastic speech therapist who did not pressure my parents to stop using ASL at home. There is this outdated concept that a Deaf child will not learn English if they also know ASL. This is not correct. The research shows that access to language, any language, is the primary indicator of linguistic success. Somewhat serendipitously, my parents were both multilingual ESL teachers before I was born. This meant they treated language as a tool for communication rather than the foundation of cultural identity that it is for many Americans. They regularly and consistently did aural training with me growing up. This free and easy access to language, switching back and forth as needed between English and ASL, allowed me to catch up in my language development. This fluency came in handy later when my brother was also born Deaf 6.
I started wearing hearing aids at around 18 months old and gradually became accustomed to using them. By the time I was around four, my language development was back on track for children my age 7. Family friends picked up bits and pieces of ASL as well, meaning babysitters and other adults in my life were able to communicate in basic ASL with me. I was surrounded by a truly wonderful group of open-minded people.
When I started school, I always had an ASL interpreter as well as an FM system 8, both the result of my parents advocating relentlessly on my behalf. From third grade until high school I attended a school that was a mix of hearing, Deaf, and hard of hearing students. Multiple teachers were fluent in sign, and almost every student, including the hearing ones, knew at least some ASL. In high school I switched back to a hearing school and was one of two Deaf kids on campus. In college I never met another Deaf student.
To this day I still work on aural training by listening to podcasts while reading along, to maintain and improve where I can. I intentionally find spoken-word content with a transcript to do this. I also seek out accents and voices that I struggle with, in an attempt to better understand people in my life with similar speech patterns.
I also control my environment when I can. No loud bars, no large meetings, no phone calls with unfamiliar people. I don’t befriend or hang out with people who don’t look at me while speaking. I don’t hang out with anyone who doesn’t repeat themselves when asked. And I fill in the blanks when I have to: I smile and nod, and mirror what you’re doing, to cover for the fact that I have no idea what you’re saying. After thirty years of doing this, I’m genuinely not sure what I’ve actually heard and what I’ve filled in on autopilot. Thus my confidence in what I think you’re saying is extremely low.
While I do know a fair number of Deaf people, compared to those who attended Deaf schools or come from Deaf families, I know practically no one. Having said that, I have never met another Deaf person with an audiogram like mine who uses English as their primary language or speaks the way I do. I’ve had audiologists waste 30 minutes checking their equipment because they don’t believe that a person who sounds like me has the level of hearing loss that I do. Let me repeat that. More than once, trained audiological medical professionals have grossly underestimated my hearing loss based entirely on my voice.
My boundaries and accommodations with regard to my hearing evolve as I grow. Something that worked for me in my twenties doesn’t work anymore in my thirties. Some people can’t wrap their minds around that, and I no longer associate with those people. I figured out a way to have the career that I have while being Deaf. It involves a lot of emails, refusing to attend meetings, and sticking to gigs that let me work on my own. I cannot figure out how to do the next rung up on my career ladder as a Deaf person. I might never move up, because I don’t know that it will be worth all the extra stress and pressure that increased interaction with hearing people will put on me. Harsh but true. It’s because of incredibly accommodating people (shout out to Linda Rebman) that I was able to get my foot in the door and hold a job as a PA all those years ago.
A constant complaint I heard (ha-ha) from hearing people during the pandemic was “Zoom fatigue”: the idea that listening to a group over a computer requires so much more active listening and effort than an in-person meeting. This is the fatigue I feel all the time when interacting in English. In my ideal world, I wouldn’t use hearing aids and I wouldn’t speak. Unfortunately my passion is design, and there are not enough Deaf people in that field for me to work exclusively in ASL with other Deaf people.
Yes, I can hear some sounds. But my ability to understand you is based on so much more than a percentage or a decibel level. It’s everything from active listening, to ongoing aural training, to controlling the environments I’m in, to using the latest hearing aid technology, a lot of guesswork, and a dozen other little tricks I’ve learned throughout my life. I don’t really notice the weight I am carrying until I take my hearing aids off for a weekend with Deaf friends, free from sound.
There are some really great hearing people out there who truly make it worth the effort for me. I’m grateful for all the friendships I have with hearing people, especially the ones who don’t ask me, but you’re not, like, completely deaf, you can still hear right?
1 - It’s hard to get babies to self-report accurately in a hearing test. In these cases you use an EEG to measure electrical signals in a sleeping baby’s brain. This allows you to detect whether that baby’s brain is lighting up in reaction to the presence of sound. This was done with me and I was not at all cooperative. Which tracks.
2 - Cochlear implants bypass the ear canal and eardrum entirely, by implanting electrodes directly inside the cochlea. The processor (the external part) transmits sound through a coil on the skin to the implanted component, which drives those electrodes to stimulate the auditory nerve directly, doing electrically what the stereocilia would normally do mechanically.
3 - When I’m on a Zoom call, I only hear what is coming out of the headphone jack of my computer and literally nothing else. Since most video conferencing software does not output your own audio to yourself, to avoid feedback, I can’t hear my own voice in these situations. When I speak, it’s just complete and utter silence. I have sent countless emails to every single video conferencing developer asking for the ability to mix in my own audio track, to make their software more accessible to Deaf users. No one has ever responded.
4 - Insurance will often cover the expense of a cochlear implant but not hearing aids, which puts a lot of pressure on parents who just discovered their child is Deaf. Not only are they trying to navigate their very complicated new reality, but they are basically told a cochlear implant is free while hearing aids are going to cost an arm and a leg. I know this isn’t the only problem with healthcare in the US, but it really bothers me that these crucial medical decisions are being unduly influenced by insurance companies rather than actual medical advice tailored to each unique situation.
5 - I took this test at the Mayo Clinic because yes, I did seriously consider cochlear implants in my mid-twenties. While I am an ideal candidate due to the level of hearing loss I have, my ability to speak and interact “like a hearing person” disqualified me.
6 - My brother is the brilliant ASL linguist and poet, Eric Epstein. You should check out a truly remarkable lecture he gave recently about constructing ASL poems and a wholly new system he invented to classify the poetic and musical features of ASL. The full extent of his genius is available through a series of courses he’s created on the subject at Sign Playing.
7 - My mom, Jeanne Hollabaugh, became a big language access advocate due to her own experiences. She went on to co-found the Arizona chapter of Hands & Voices, a parent-led organization dedicated to serving families with children who are Deaf/Hard of Hearing without a bias around communication modes or methodology. She also won an Advocate Award from the Arizona State Commission for the Deaf and Hard of Hearing for her work.
8 - An FM system means my teachers wore microphones and the audio was broadcast directly to my hearing aids over an FM frequency.
Since you’ve read this far, here is one last secret little factoid. The reason my username is iamthatdeafkid is to subtly and constantly remind my hearing friends that I am Deaf. When people forget this, they suddenly stop looking at me when talking, they play music in the background, they get annoyed when I ask them to repeat themselves over and over. Believe it or not, my username helps me avoid this problem.