The Sound Barrier #1: The myth of hearing
40 min
Nov 3, 2025
Summary
This episode explores how the brain actively edits and reconstructs sound rather than passively receiving it. Through auditory illusions discovered by psychologist Diana Deutsch and the story of Mike Chorost adapting to a cochlear implant, the episode reveals that what we hear is largely a prediction shaped by experience, expectations, and brain processing rather than an accurate representation of sound waves.
Insights
- The brain suppresses echoes and reorganizes sounds in space based on expectations about how the world should sound, not what's actually being presented to the ears
- Top-down processing means our brain uses prior knowledge and experience to predict and interpret sensory input, sometimes creating auditory illusions when expectations don't match reality
- Cochlear implant users demonstrate the brain's remarkable plasticity—the ability to remap distorted electrical signals into meaningful sound through training and adaptation over months
- Music perception through cochlear implants is significantly harder than speech because music requires precise frequency discrimination that current implant technology cannot provide
- Individual perception of sound varies based on life experience, geography, handedness, and familiarity with specific sounds—there is no single 'real' version of what we hear
Trends
- Brain plasticity and neuroadaptation in medical device users shows potential for improving outcomes through training protocols
- Auditory illusions as research tools for understanding brain processing and perception mechanisms
- Gap between cochlear implant hardware capabilities and music perception, an opportunity for specialized audio processing algorithms
- Growing recognition that sensory perception is subjective and constructed rather than objective reality
- Personalized audio experiences based on individual brain mapping and adaptation patterns
- Interdisciplinary approach combining psychology, neuroscience, and audio engineering to understand hearing
- Importance of user training and iterative device optimization in medical technology outcomes
- Research into how familiar vs. novel sounds are processed differently by the brain
Topics
- Auditory Illusions and Brain Perception
- Top-Down Processing in Sensory Perception
- Cochlear Implant Technology and Adaptation
- Music Perception and Frequency Discrimination
- Brain Plasticity and Neural Remapping
- Speech vs. Music Processing in the Brain
- Pitch Perception and Geographic Variation
- Echo Suppression and Spatial Sound Localization
- Hearing Loss and Assistive Technology
- Neuroscience of Expectation and Prediction
- Auditory Training and Rehabilitation
- Individual Differences in Sound Perception
- Cochlea Function and Inner Ear Mechanics
- Sensory Processing and Subjectivity
- Medical Device User Experience and Outcomes
Companies
UC San Diego
Diana Deutsch is a psychology professor at UC San Diego studying auditory illusions and brain editing
University of Minnesota
Matthew Winn is an audiologist at the University of Minnesota who works with cochlear implant users
BBC
Diana Deutsch worked as a page turner at BBC House in the 1950s, a formative experience in her career
People
Diana Deutsch
Psychology professor at UC San Diego who discovered auditory illusions revealing how the brain edits sound
Mike Chorost
Science writer with severe hearing loss who received a cochlear implant and trained his brain to hear music
Matthew Winn
Audiologist at University of Minnesota specializing in cochlear implant users and auditory perception
Maurice Ravel
Composer of Bolero, the classical piece that became Mike Chorost's auditory touchstone for testing hearing aids
Noam Hassenfeld
Reporter and producer of the episode who reported on Diana Deutsch's research and Mike Chorost's story
Quotes
"I heard a single high tone in my right ear, but alternated with a single low tone in the left ear. Both ears were getting high low sequences, but she wasn't hearing them in both ears."
Diana Deutsch•~12:00
"It just seemed that the world had just turned upside down. I was beside myself. It seemed to me that, you know, I'd entered another universe or I'd gone crazy or something."
Diana Deutsch•~13:00
"It was like my hearing was pouring out of my head like water out of a cracked jar."
Mike Chorost•~38:00
"My brain was saying, okay, this is my voice. I know it's supposed to be a low pitch. However, right now, I'm hearing it as a high pitch. Never mind that. Because I know it's a low pitch, I'm going to interpret it as a low pitch."
Mike Chorost•~48:00
"There's no one real version of the music, but many. And each one is shaped by the knowledge and expectations that listeners bring to their experiences."
Diana Deutsch•~70:00
Full Transcript
Support for the show comes from Anthropic, the team behind Claude. They say that Claude is the collaborator that actually understands your entire workflow. So for developers, that looks like Claude Code. It runs in your terminal, reads your code base, and can apparently take on things like writing tests, refactoring, or debugging without you handholding it through every step. Anthropic committed to not running ads in Claude, so when you are deep in something that matters to you, they say the answer you get is shaped by your question, not by an advertiser's agenda. Ready to tackle bigger problems? Get started with Claude today at Claude.ai. The Toyota Tundra and Tacoma are built to keep going, backed by Toyota's reputation for legendary reliability. Step into a Tundra with the available iForce Max Hybrid Engine, delivering impressive torque and serious towing power. Or take a look at Tacoma with an available power liftgate so gear goes in fast and the adventure keeps moving. Toyota trucks are built to last year after year, mile after mile. So drive one home today; visit toyota.com to find out more. Toyota, let's go places. It's Unexplainable. I'm Noam Hassenfeld. For a lot of people, figuring out what you're meant to do with your life is a long, drawn-out process. But for some lucky ones, a career path becomes clear in an instant. Well, I've always been very interested in music. I spent all my time playing the piano and composing and so on. For Diana Deutsch, that moment happened back in the 50s, but it didn't go exactly how she imagined it. My music teacher performed at the BBC Third Programme in the mornings. She was playing piano in a trio. And I was asked to be a page turner. Essentially, she'd be turning the pages of the sheet music so her teacher wouldn't have to stop playing. So I went up to BBC House and I was all of 16 at the time and very excited about doing this. Diana had always dreamed of being a musician.
So even just turning pages on the BBC felt like the big time. What happened was I turned the first page, no problem. I turned the second page, no problem. When it came to the third page, unfortunately, my hand jerked and all the pages flew down onto the floor. The poor lady had to, while playing the piano with one hand, pick up the pieces with the other. Yeah, it was a terrible experience. Diana came face to face with her dream and she knew with complete clarity that it wasn't for her. It certainly made me realize that being a performing musician was probably not a good idea for me. Instead of aiming for a career as a performer, Diana got into researching the psychology of music, particularly how different people perceive sounds. And she was one of the first people to study this by generating synthesized tones using enormous mainframe computers. One day in 1973, she was experimenting with playing two sequences at the same time. And I had no idea what would happen, but I thought it would be interesting to try. You can actually hear exactly what Diana heard back then, but only if you're listening on headphones. So if you have a pair around, now would be a good time to put them in. I started off with a high tone alternating with a low tone in one ear. And at the same time, a low tone alternating with a high tone in the other ear. High low on one side, low high on the other. And what I heard seemed incredible. I heard a single high tone in my right ear, but alternated with a single low tone in the left ear. Both ears were getting high low sequences, but she wasn't hearing them in both ears. She only heard high tones on the right and low tones on the left. Just as a kind of knee-jerk reaction, I switched the headphones around. And it made no difference to what I perceived. The high tones remained in my right ear and the low tones remained in my left ear. If you have headphones on, flip them around. There's probably no difference.
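For readers who want to recreate the kind of dichotic stimulus Diana describes, here is a minimal pure-Python sketch that writes a stereo WAV file: each ear gets an alternating high/low sequence, offset by one tone so the ears always disagree. The 400 Hz / 800 Hz frequencies and 250 ms tone duration are assumptions loosely based on common descriptions of the octave illusion; Deutsch's exact published stimuli may differ.

```python
import math
import struct
import wave

RATE = 44100               # samples per second
TONE_S = 0.25              # each tone lasts 250 ms (assumed)
LOW, HIGH = 400.0, 800.0   # an octave apart (assumed values)

def tone(freq, seconds, rate=RATE):
    """One channel's worth of sine samples for a single tone."""
    n = int(seconds * rate)
    return [math.sin(2 * math.pi * freq * i / rate) for i in range(n)]

def octave_illusion(pairs=8):
    """Left ear: high-low-high-...; right ear: low-high-low-...
    Both ears get alternating sequences, offset by one tone."""
    left, right = [], []
    for k in range(pairs * 2):
        left += tone(HIGH if k % 2 == 0 else LOW, TONE_S)
        right += tone(LOW if k % 2 == 0 else HIGH, TONE_S)
    return left, right

def write_stereo(path, left, right):
    """Write the two channels as a 16-bit stereo WAV file."""
    with wave.open(path, "wb") as w:
        w.setnchannels(2)
        w.setsampwidth(2)   # 16-bit samples
        w.setframerate(RATE)
        frames = bytearray()
        for l, r in zip(left, right):
            frames += struct.pack("<hh", int(l * 32000), int(r * 32000))
        w.writeframes(bytes(frames))

left, right = octave_illusion()
write_stereo("octave_illusion.wav", left, right)
```

Played over headphones, most listeners should hear something like what Diana reported: a single stream of high tones on one side and low tones on the other, even though both channels contain both pitches.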
I went out into the corridor and pulled in as many people as I could. And by the end of that afternoon, I must have tested, I don't remember how many, but probably dozens of people. And most of them heard exactly what I heard. Diana literally couldn't believe it. I was beside myself. It seemed to me that, you know, I'd entered another universe or I'd gone crazy or something. It just seemed that the world had just turned upside down. I first talked to Diana a couple years ago when I originally reported this episode. And I haven't been able to stop thinking about what she told me. How so much of what we hear isn't the actual world out there. How a lot of it is constructed in our brain. So I decided to make a whole Unexplainable series about it. It's called The Sound Barrier. Over the next four episodes, we're going to explore the limits of hearing and the ways we can break through. From someone relearning how to listen to music after hearing loss, to people trapped by the sounds in their heads, to astronomers who've figured out a way to listen to space. A lot of the concepts in our series flow directly out of this original hearing episode with Diana. So as we get ready to dive deep on sound, I wanted to start here. What are we actually hearing when we're hearing? Before we get to all the unknowns, let's start with what we do know about sound. Sound is rapid changes in air pressure that happen when something is vibrating. Matthew Winn, audiologist, University of Minnesota. So you can think of it in the same way that you think of a wave in a pond. None of the water particles move very far. They just sort of bob up and down, but they set a whole wave into motion. And it's like a domino effect moving through space. This pressure wave travels through the air, and then, you know, a whole chain of events is set into motion in your ear. The wave passes through the ear canal. The eardrum vibrates back and forth.
And a few little bones amplify that vibration, sending it deeper toward the cochlea, this spiral-shaped organ in the inner ear that's covered with thousands of hair cells. The cochlea is where the sensory cells are that pick up the sound and turn it into something the brain can use. Pressure waves become electrical impulses, which are eventually interpreted as sound. So this sounds like a long, complicated process, but it's extremely fast. I mean, there's no sense that's faster than hearing. Your ear can do this whole process thousands of times per second. All of that, the pressure waves, the ear vibrations, the transformation to electrical impulses, that's the simple part, the part we know. The complicated part is pretty much going to take up the rest of this episode, because there's a difference between the pressure waves that enter our ears and what we actually end up hearing. If we actually perceived every different sound that came in, we would be utterly confused. Take Matthew's voice, for example. Even in the room that I'm in right now, I'm just in a room in my house, there are echoes all around me, because anytime you have a flat surface, a table, a wall, a computer screen, anything, the sound will in fact reflect off of it. All of these echoes bouncing around should theoretically make sounds really hard to locate in space. And so if we hear that and then hear another echo coming from the wall on my right, and then I hear an echo coming off the ceiling and then my table, how would I know which direction the sound is coming from? It's coming from all directions. But our brain has an answer. Thankfully, our brain knows sounds only come from one direction, and that's the only way the world makes sense. In order to function in the real world, our brain makes a guess.
It perceives that first wave of sound coming in, and then every subsequent reflection of that sound, it's like saying, okay, I can suppress you, which is why a lot of people aren't even aware that there are echoes, because our brain is so good at suppressing them. Our brain essentially edits our auditory experience. The way I like to phrase it is that the brain is being nudged in a direction rather than just straight out reading the world. Which is exactly what Diana stumbled across that day in the 70s, when she was flipping her headphones back and forth. It just seemed that the world had just turned upside down. These days, auditory illusions aren't as unheard of as they used to be. But Diana's a big reason why. She's now a psychology professor at UC San Diego, and she's been using computer-generated sounds to study the brain's editor for decades. With that first illusion she discovered, Diana thinks two parts of your brain are disagreeing, the parts that determine pitch and location. That's why you hear a high tone on one side and a low tone on the other, even though they're really on both sides. And after finding that first illusion, Diana couldn't stop thinking about it. Of course, I didn't sleep much that night. This can't be the only illusion that does this kind of thing. Diana started wondering whether she could design other illusions to learn more about the brain's internal machinery. In the same way as, you know, if a piece of equipment such as a car breaks down, you can find out a lot about the way the car works just by fixing what went wrong. So she started brainstorming. I was sort of half asleep and I was imagining notes jumping around in space. And by the next morning, they had sort of crystallized into what I named the scale illusion. The scale illusion. Just like before, this illusion consists of two tone sequences, one in each ear. So there's one channel alone. Some high notes, some low notes. And then the other channel alone. 
Some more high notes, some more low notes. And then you hear them together again. If you're listening on headphones, you're probably hearing all the high notes on one side and all the low notes on the other, even though those notes are actually jumping from left to right. That's your brain editing the sounds. It's separating them to reflect the way the world usually is. In the real world, one would assume that sounds that are in a higher pitch range are coming from one source and sounds in a lower pitch range are coming from another source. So that's what the brain assumes is happening here. The brain reorganizes the sounds in space in accordance with this interpretation. Just like removing echoes, this kind of brain editing would normally help you make sense of the world. But Diana's illusion is explicitly designed to fool the brain into making a wrong guess. And not everyone's brain makes the same guess. Left handers as a group are likely to be hearing something different from right handers as a group. Right handers tend to hear high tones on the right side. But for left handers, it's more complicated. They're likelier than other people to hear high tones on the left or in even weirder ways. All of this reorganization, the way the brain edits our hearing to help us navigate the real world, it's sometimes called top-down processing. Top-down processing occurs when the brain uses expectation, experience, and also various principles of perceptual organization to influence what is perceived. Instead of bottom-up processing, which is sensing the world and then having that travel up to the brain, top-down processing means that our brain is influencing how we hear. In a sense, a lot of what we perceive isn't actually us hearing sound waves hit our eardrum. It's a prediction of what those waves should be. To illustrate this, Diana uses something called the mysterious melody. This is a well-known tune, but the notes are presented in different octaves.
For all the non-music folks out there, an octave is basically a standard range of musical notes. In this illusion, the notes stay the same, but which range they're played in changes. So instead of playing Do Re Mi in the same range with all the notes next to each other, you could play Do Re Mi with the notes jumping into a different range. So Diana takes a well-known tune, doesn't change the melody, just changes the range. And the question is, can people recognize this melody? And in fact, people can't recognize the melody. Now listen to a simplified version of the same sequence. In this case, all the notes are in the same octave. Same range. You know what it is. Yeah, indeed, it's Yankee Doodle. And a lot of times when people go back and listen to the scrambled version, they can hear Yankee Doodle in there. When you have a frame of reference for what you're hearing, when you have an expectation, it actually changes what you're hearing. Illusions like this tend to circulate around the internet every once in a while. Like this one where, depending on which word you're thinking of, you might be able to hear either Laurel or Yanny. Laurel. Remember last year when that Laurel versus Yanny thing, everybody's going nuts over? Well, there's a kiddie version of it making the rounds right now. This is from Jimmy Kimmel's show, and he starts by pulling up a clip from Sesame Street of all places. And pay attention to this, because tell me if you hear Grover say one of two things, that sounds like an excellent idea, or that's an effing excellent idea. Are you ready? Okay. Kimmel, what did you hear? The first time I heard it, I didn't hear a curse word at all, and then the next 12 times I watched it, the F word was all I heard. But just in case you want one more go at it, here's Grover, maybe making a lot of parents upset. This type of misperception is true to an extent with all our senses.
We've all seen visual illusions, or you might remember the debate around the dress, but Diana eventually found that the various ways our brain edits the world, they're not just due to hard-coded differences, like whether you're right or left-handed. Brain editing can vary from person to person based on life experience. To prove this, she asked listeners to determine whether a pattern is going up or going down. For people who know a bit of music theory, this interval is a tritone, which is exactly half of an octave. So to get from note to note, you travel the same distance whether you're going up or down. If you don't know that much about music, all you need to know is that this is a particularly ambiguous pattern. But Diana does something really interesting in her experiment here. She plays the melody in a bunch of registers at the same time. So you might have an extra hard time figuring out if it's rising or falling. And sure enough, you get huge differences from one individual to the other. And this is something that really does surprise people. I hear it going up, and Diana found that other people hear it going up. But some people hear it going down. What's truly mind-boggling is that Diana's found that the difference in how two people perceive this pattern, it might come down to where you grew up. Believe it or not, when Diana compared two groups, people from Southern England and people from California, she found that the English people tended to hear this pattern as rising, whereas the Californians heard that same pattern as falling. Diana's hypothesis is that based on where you grow up, you tend to hear different pitches as low or high. It has to do with the pitch range of the speech to which you have been most frequently exposed, particularly in childhood.
So if you hear that first pattern, which goes from the notes D to G sharp, as falling, you probably hear this second pattern, which goes the exact same distance from the notes A to D sharp, as rising, or vice versa. But ultimately, the mechanics of all this are still pretty much a mystery. Scientists don't really know how all this brain editing happens. I mean, we know that the brain does that, but we don't really know how. In a sense, it's almost like we're all listening to a play performed in our heads, just for us. There's a script, the entire world of pressure waves bouncing around, but how we actually hear it all is up to the performers. In so many ways, our brain dictates how we hear the world. But even though we don't know exactly how our brain does this, there are times when harnessing that brain magic starts to become a lot more important. It was like my hearing was pouring out of my head, like water out of a cracked jar. Coming up after the break, one man's quest to hear his favorite piece of music again. That's next. Support for the show comes from Anthropic, the team behind Claude. If you are the kind of person who goes down a rabbit hole and then stays there, or who keeps pulling at a question until it clicks, they say Claude was built for that kind of thinking. For developers, that looks like Claude Code. It runs in your terminal, reads your code base, and can apparently take on things like writing tests, refactoring, or debugging without you handholding it through every step. I texted my friend who uses Claude and told him I was making an ad about Claude and asked why I should use Claude and/or Claude Code. It's just really good at coding lol, he said. What does that mean? I said. With it, I can build things I wouldn't have time for myself or ability for myself in many cases. Nice, I said. Anthropic says they are committed to not running ads in Claude.
So when you are deep in something that matters to you, they say the answer you get is shaped by your question, not by someone else's advertisement taking you out of the deep work. Ready to tackle bigger problems? Try Claude for free at claude.ai slash unexplainable and see why some problem solvers choose Claude as their thinking partner. Support for Unexplainable comes from Shopify. Every worthwhile journey starts with a handful of granola and a brain full of dreams. Dreams like what if I discover a new species or a new genus. Dreams like what if I paint a masterpiece that whispers some eternal truth to everyone who sees it. Dreams of winning the big race car trophy with your big fast race car. Or dreams of building your own thriving business. That last one is a dream Shopify can help you with. Shopify is the commerce platform behind millions of businesses around the world and, according to their data, at least 10% of all e-commerce in the United States. Start with your own design studio and choose from hundreds of ready-to-use templates to make your online store. You can use their AI tools to help you with copywriting and glamming up your product photos. And then use the platform to build a marketing campaign. You can turn those what-ifs into a thriving business with Shopify today. Sign up for your $1 per month trial today at Shopify.com slash unexplainable. Go to Shopify.com slash unexplainable. That is Shopify.com slash unexplainable. This episode is brought to you by Indeed. Stop waiting around for the perfect candidate. Instead, use Indeed Sponsored Jobs to find the right people with the right skills fast. It's a simple way to make sure your listing is the first thing candidates see. According to Indeed data, sponsored jobs have four times more applicants than non-sponsored jobs. So go build your dream team today with Indeed. Get a $75 sponsored job credit at Indeed.com slash podcast. Terms and conditions apply. Move the camera. Yes. Yes. That's an excellent idea.
Unexplainable. We're back. And we've been talking about the mysterious way our brain filters, edits, and even reconstructs the world that we hear. For some people, this kind of brain magic can be interesting to highlight as a party trick. But for others, it can be way more important. Okay. Testing one, two, three. Testing. This is Mike Chorost. So it's like you take the word chorus, just add a T at the end. Mike's a science writer who was born with severe hearing loss, but he was able to use hearing aids. And starting from when he was 15, he became obsessed with Bolero, the famous piece by Maurice Ravel. It was this riotous melange, with such a fascinating drum beat underneath it all, that really thrilled me and fascinated me. He particularly loved the way the melody would gradually evolve over the course of the piece. Each repetition is on a higher level. It's louder. The resonance is deeper. Until it meets the climax. So it's a very auditorily overwhelming piece of music. He would listen to Bolero over and over and over. It was kind of my piece of music that I would come to again and again and again to test out new hearing aids. So it's always been an auditory touchstone for me. And then one day in 2001, the limited hearing he still had started disappearing. I was standing outside a rental car and I suddenly thought that my batteries had died, my hearing aid batteries. Suddenly the traffic on a nearby highway started sounding different. It was just that sound that you associate with cars going by. But all of a sudden it sounded more like... You know, if somebody had dumped a whole bunch of cotton onto the highway. Pretty soon, Mike found out he was quickly losing what was left of his hearing. It was like my hearing was pouring out of my head like water out of a cracked jar. So about four hours after that initial realization, I was essentially completely deaf. It was just such a shocking experience. But Mike was eligible to receive a cochlear implant.
It's a surgically implanted device that can offer a form of hearing in some deaf people. Many people in the deaf community prefer to communicate using sign language or lip reading rather than using a cochlear implant. But for some people, especially people who've lost their hearing later in life and want to continue using their native spoken language, cochlear implants can be helpful tools. The cochlea is this tiny spiral shaped organ inside your head. And a cochlear implant is a string of electrodes that's carefully inserted inside that spiral organ. This is Matthew again, the audiologist who actually works with cochlear implant users to help them understand their experience. There's this external part that looks like a hearing aid, but is not a hearing aid. It's a microphone and a computer that analyzes the sound and sends instructions to those electrodes that are inside the ear. The implant essentially bypasses a lot of the ear. It directly activates the cochlea, which then passes an electric signal onto the brain. But cochlear implants don't just reproduce normal hearing. Mike says that reducing sound to digital ones and zeros and beaming them directly into your brain, it can sound strange. It was shocking. It's not at all what I expected. When Mike's implant was turned on, the first thing he did was listen to his own voice. And my voice sounded really weirdly high pitched. I almost sounded like... Yeah, it was that kind of sound. It's like listening to a demented mass. Matthew actually gave me a program he uses as an audiologist to simulate various types of cochlear implant sounds. So here's a general idea of what it might have sounded like to Mike. It was very upsetting. I thought the world would sound pretty much like I heard with hearing aids, just fuzzier. I was completely unprepared for the huge difference in pitches. Because of the way the implants are designed, they tend to make everything seem a bit high pitched. 
So when you send a signal to any part of the cochlear implant, the brain will interpret that as a high-pitched sound, even if it's a low pitch. Which is why everything can sound all mousy. But the interesting thing is, within just a day or two, I started to hear low pitches again. And part of that, it was my brain adapting to it. My brain was saying, okay, this is my voice. I know it's supposed to be a low pitch. However, right now, I'm hearing it as a high pitch. Never mind that. Because I know it's a low pitch, I'm going to interpret it as a low pitch. Essentially, Mike's brain was editing the world for him. So very quickly, my brain started figuring out, okay, the world sounds really weird, but I'm going to try to fit that into my preconception of what the world is supposed to sound like. He was taking command of his own top-down processing. So within hours, I stopped sounding like Mickey Mouse to myself. And then Mike started training. I got the audio books of the Winnie the Pooh books. And I remember the first time I put the tape into the cassette player and played Winnie the Pooh and Some Bees. I think that's the one. I couldn't make it out at all. It was just completely gibberish. But he also had the physical book. So he read along with the tape. So I was able to start matching up the weird input that I was getting with the words on the page that told me what that input meant. What about a story? Said Christopher Robin. Could you very sweetly tell Winnie the Pooh one? This is what the S sounds like. This is what the phoneme Pooh sounds like. Winnie the Pooh. So it is a process of remapping. According to Matthew, this process of brain remapping is a pretty normal experience for cochlear implant users. Any good audiologist would say to someone if they're thinking about a cochlear implant that when you first get it and it first is activated, you probably won't understand much at all.
But over the first six months, maybe the first year, your brain learns to reorganize how it associates sound with meaning. Training is more accessible these days. It's certainly not as DIY as it was for Mike 20 years ago. But this kind of improvement can still be hard to believe. A lot of the people that I've worked with will say, now when I listen to my spouse, it sounds like her voice, which baffles all of us who work in this field because if you look at how the ear is being activated, there's no explanation. I mean, not to be too on the nose, but it's unexplainable, right? So there's no way that that could possibly be true. And yet a lot of people say it. Tweaking settings on the implant does make it work better, but that doesn't account for most of this incredible improvement. A lot of the success of the cochlear implant is really a testament to how strong the brain is working rather than a reflection of the high quality of the sound input. Our brains have an almost uncanny ability to predict language and fill in gaps, even when we hear something muffled or distorted. But while cochlear implants work pretty well for speech, they don't work nearly as well for music. Music is just a much more complicated kind of sound. You need to distinguish melodies and harmonies and textures and most fundamentally, pitches. And an implant only has a small number of electrodes. You have to simplify all the frequencies and you can think of it as like pixelating the sound. Making this even harder, because the cochlea is filled with fluid, it's hard to use electrical pulses to stimulate the exact part that codes for the right frequency. Instead, the pulses kind of spread out around the part that codes for that frequency. Let me make an analogy. Suppose you're playing a note on the piano. You can be really careful and hit the exact key you want, or you can be kind of crude and put your whole hand down on the piano. 
Like you're going to be in the right ballpark of the note, but you're not going to hit the exact note very clearly. So a cochlear implant is more like putting your whole hand down on the note. It's not a very precise frequency you're hearing. When you take all of this into account, translating music with a cochlear implant can seem almost impossible. The current design of cochlear implants isn't set up really for music. It's set up to understand speech. But I'm wanting my Bolero back. Even though Mike's brain had learned how to edit those high-pitched, tinny sounds to understand speech, music still wasn't the same. It just sounded awful. I'm like, oh my god, you know. It was really shocking because even if it gets twice as good as this, it's still going to be awful. Even if it gets three times as good as this, it's still going to be awful. It was really bad. Mike upgraded the hardware of his cochlear implant. He upgraded the software. He even volunteered as a guinea pig for some tests on new equipment. So I would put on a set of headphones, I'd hear a set of beeps and boops, and it's like, okay, which song is that? I'm like, I don't know. It's like, could anybody know? And for me, this was a very deeply frustrating kind of experiment because I know Twinkle, Twinkle, Little Star. I was like, that doesn't sound like Twinkle, Twinkle, Little Star to me. How could this sound like Twinkle, Twinkle, Little Star to anybody else? Researchers I spoke to told me that some cochlear implant users just don't enjoy music that much. It's certainly harder to get used to than speech. And because patients are often told to focus more on improving listening to speech, music can get left by the wayside. But appreciating music through an implant can sometimes be presented as an insurmountable obstacle.
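The "whole hand on the piano" effect described above can be illustrated with a toy electrode map: every incoming frequency collapses onto the nearest of a handful of log-spaced channel centers, so nearby notes become indistinguishable. The channel count and frequency range below are illustrative assumptions, not the specification of any real implant (real devices typically have on the order of 12 to 22 electrodes, and their processing is far more sophisticated than this).

```python
import math

# A toy "electrode map" (values assumed for illustration only)
N_ELECTRODES = 12
LOW_HZ, HIGH_HZ = 200.0, 8000.0

def electrode_centers(n=N_ELECTRODES, lo=LOW_HZ, hi=HIGH_HZ):
    """Log-spaced center frequencies, mirroring the cochlea's
    roughly logarithmic frequency map."""
    step = (math.log(hi) - math.log(lo)) / (n - 1)
    return [math.exp(math.log(lo) + i * step) for i in range(n)]

def pixelate(freq_hz, centers):
    """Collapse an input frequency onto the nearest electrode center,
    measured in log-frequency: the 'whole hand on the piano' effect."""
    return min(centers, key=lambda c: abs(math.log(c) - math.log(freq_hz)))

centers = electrode_centers()
# F#4 (370 Hz) and G4 (392 Hz), a semitone apart on the piano,
# collapse onto the same channel in this toy map:
same_channel = pixelate(370.0, centers) == pixelate(392.0, centers)
```

With only twelve channels covering the whole musical range, adjacent notes land on the same electrode, which is one intuition for why melodies can come through as undifferentiated "beeps and boops" before the brain adapts.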
You can see this in the movie Sound of Metal, where a musician gets a cochlear implant after losing his hearing and then goes to a performance, listening to the song you're hearing right now. In this scene, the movie shows what other people at the performance hear, and then it gradually shifts perspectives to highlight what the main character hears through his cochlear implant. The performance is so upsetting for the main character that he ultimately takes his processor off. He essentially decides not to use his implant anymore. You can find a lot of simulations like this online. So I asked Mike if these kinds of simulations, or even ones like the simulations I created of a distorted voice or a distorted Boléro for this episode, seem like accurate representations of what music sounds like through an implant. I think you have to be extremely careful when listening to these simulations, because basically what those simulations are telling you is, this is what the software is giving to the user. That's not the same thing as what the user hears. These are two very different things. When I listen to these simulations, and I have listened to them, it does sound a lot like what I heard on day one. It does not sound like what I hear in year 20. For Mike, this was a combination of training himself with careful listening, but also tweaking the settings of the implant, because with a lot of practice and effort and time, the experience of listening to music can improve. Yeah, I would listen to music over and over again, and I would try tweaking different settings. And I would go to my audiologist and say, these pitches sound really fuzzy to me. Can you do something about that? And she would tweak how much electricity went to different electrodes. It was an iterative process that went on, and is still going on. After years of upgrades, tweaks, and training, Mike's noticed some real improvement, but not for all music.
Most of the pieces of music that I enjoy are pieces I heard with hearing aids. They're familiar to me. Mike does listen to some new music, but preferring familiar music is a pattern that Matthew notices with his patients too. And I think it's a testament to the brain filling in those gaps, conjuring the memory of what the sound quality should be. The implant gives you just enough that the brain can put together the whole puzzle. And of course, Mike is listening to Boléro again. Well, it sounds good. I really enjoy it. But there are things that I know that I'm missing. I know that I'm still not getting some of that intensity and purity where the music is reaching for a crescendo in each of its iterations. So I know I'm missing that. In a sense, Boléro is so familiar, it's almost like language for Mike. Boléro sounds really good to me because I know exactly what it's supposed to sound like. This new Boléro is certainly different from the version he remembers, but Mike loves the new version. Even though the input I'm getting of Boléro is incomplete, and I can hear that it's incomplete, it is still a source of pleasure to me. Ultimately, we don't really know exactly how our brain is able to do this. It can almost feel like magic: how it filters out echoes, how it shifts high tones to one ear and low tones to the other, how it can take a tinny, noisy input and rebuild a new version of Boléro. We do this very complex calculation, but I don't think that we really know exactly how it's done. Psychologist Diana Deutsch again. There are an awful lot of things about our hearing that we don't understand, and what we hear is often quite different from what, in point of fact, is being presented. But we do know that the brain is constantly editing, shaping, and building the world that we hear. Our brain, our life experience, our familiarity with a piece of music, it all shapes how we hear and what we hear, which raises a pretty fundamental question.
When an orchestra performs a symphony, what is the real music? Is it in the mind of the composer? Or is it in the mind of the conductor, who has worked long hours to shape the orchestral performance? Is it in the mind of someone in the audience who's never heard it before and doesn't know what to expect? The answer is surely that there's no one real version of the music, but many, and each one is shaped by the knowledge and expectations that listeners bring to their experiences. The idea that, to a very real extent, our brains conjure different individual realities inside our heads is, on the one hand, a clear reminder to be humble. And not just about hearing. No matter how certain we are, what we perceive isn't unfiltered reality. So it's worth questioning ourselves in our most stubborn moments. At the same time, though, how cool are brains? I know they're this perfect reminder of our own subjectivity and humility, but I also just can't get over the fact that our brain puts on this fireworks show every day. And that a lot of people using a cochlear implant can tap into this almost magic ability to translate the signals from a few electrodes into a new, emotionally satisfying experience, without scientists really knowing how the whole thing works. There's so much we still don't understand about the brain and how it tries to make sense of the world. And it just makes me that much more excited for everything we're going to learn along the way. This is just the first episode of four in our series, The Sound Barrier. On the next episode: a listener with tinnitus who heard this episode and got in touch to ask how to retrain her brain. I was thinking about the cochlear implant, like how they had to train themselves in a way, you know? That's why I was like, should I reach out? That's next time. As for this episode, it was reported and produced by me, Noam Hassenfeld. I also wrote the music. It was edited by Katherine Wells, Brian Resnick, and Meredith Hodnott, who runs the show.
Mixing and sound design from Christian Ayala, with an ear from Efim Shapiro. Richard Sima checked the facts. Sally Helm and Joanna Salotaroff gave me tons of sound advice. Jorge Just and Julia Longoria are editorial directors. And Bird Pinkerton watched as the boomerang whacked into the locking mechanism on the carriage. The door slid open, but the octopus wasn't moving. If you want to check out more about Diana Deutsch and auditory illusions, we've got a link in our show description where you can find more illusions to listen to and a ton of info about the illusions she's discovered. Thanks, as always, to Brian Resnick for co-creating the show along with me and Bird. And if any of you out there have thoughts about the show, send us an email. We're at unexplainable@vox.com. You can also leave us a rating or a review, which really helps us out. And if you're into supporting the show and all of Vox in general, join our membership program. You can go to vox.com/members to sign up. Unexplainable is part of the Vox Media Podcast Network, and we'll be back with episode two of The Sound Barrier on Wednesday. Support for the show comes from Anthropic, the team behind Claude. They say that Claude is the collaborator that actually understands your entire workflow. For developers, that looks like Claude Code: it runs in your terminal, reads your code base, and can apparently take on things like writing tests, refactoring, or debugging without you handholding it through every step. Anthropic has committed to not running ads in Claude, so when you are deep in something that matters to you, they say the answer you get is shaped by your question, not by an advertiser's agenda. Ready to tackle bigger problems? Get started with Claude today at claude.ai/unexplainable.