Strangely enough, when we listen to speech in our native language, we don't really hear everything. One reason is that the person we're talking to may not pronounce some words distinctly, or there may be background noise. Besides, native speakers typically reduce or omit sounds, sometimes several in a single word, so you can imagine what happens at the sentence level, let alone over longer stretches of listening input.
So how do we manage to understand what we hear? Well, it's all in our minds, or brains, to be more specific. Our brain is responsible for processing the information we hear, and to do this job well, it does not rely solely on what we actually hear. At times, the brain has to fill in the gaps in the stream of audio input. This is possible, among other things, because the brain stores information related to our general knowledge, knowledge of specific subject matter, life experience, and so on.
Have you noticed that when listening to foreign speech, it's easier to understand the things you expect to be mentioned? Moreover, sometimes we just know what somebody else is going to say! That's because of all the information kept in our brain, which serves as a solid foundation for new data to be built on.
Normally, audio information is processed in our brain automatically, but on the IELTS there are things you can do to facilitate this process, and this is where anticipation and prediction come into play.