Sound localization


Sound localization is a listener's ability to identify the location or origin of a detected sound in direction and distance; the term also refers to methods in acoustical engineering for simulating the placement of an auditory cue in a virtual 3D space (see binaural recording).

There are two general classes of cues for sound localization: binaural cues and monaural cues.

Binaural cues

Binaural localization relies on the comparison of auditory input from two separate detectors; accordingly, most auditory systems feature two ears, one on each side of the head. The primary biological binaural cue is the split-second delay between the time when sound from a single source reaches the near ear and when it reaches the far ear, technically referred to as the "interaural time difference" (ITD). In an animal with a larger head, the interaural distance is larger. In the extreme case, where the sound source is located all the way to one side, the time taken for the sound to travel from one ear to the other is at its maximum, so animals with larger heads have a larger maximal ITD and smaller animals, with smaller heads, have smaller maximal ITDs. For humans, the maximal ITD is about 0.63 ms. The usefulness of the ITD cue depends on the sound being approximately equally loud in both ears. At higher sound frequencies, however, the head becomes large relative to the wavelength and starts to interfere with sound transmission: with the sound source on one side of the head, the ear on the opposite side begins to be occluded. This is called the head-shadowing effect, and as a result the ITD cannot be used as a binaural localization cue at higher sound frequencies.

At the higher end of the range of hearing, another binaural cue is used, which results from one ear being occluded or shadowed and thus receiving sound at a lower level: the eardrums are sensitive only to differences in sound pressure level. This frequency-dependent cue is referred to as the "interaural level difference" (ILD), also called the "interaural amplitude difference" (IAD) or "interaural intensity difference" (IID). The phenomenon whereby one localization cue is used at lower sound frequencies and another at higher frequencies is sometimes referred to as the "duplex theory of sound localization", initially postulated by Lord Rayleigh.
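
As a rough illustration of how head size sets the maximal ITD, Woodworth's classic spherical-head approximation gives the delay as a function of azimuth. A minimal sketch in Python; the head radius and speed of sound are assumed textbook values, not figures from this article:

    import math

    def itd_woodworth(azimuth_deg, head_radius=0.0875, speed_of_sound=343.0):
        """Approximate ITD (seconds) for a spherical head (Woodworth model).

        azimuth_deg: source azimuth in degrees (0 = straight ahead, 90 = full side).
        head_radius: half the interaural distance in metres (an assumed value).
        """
        theta = math.radians(azimuth_deg)
        # Path difference = arc around the head (a*theta) plus the straight
        # segment (a*sin(theta)), divided by the speed of sound.
        return (head_radius / speed_of_sound) * (theta + math.sin(theta))

    # At full lateral displacement this gives about 0.66 ms, close to the
    # ~0.63 ms maximal ITD quoted above for humans.
    print(f"{itd_woodworth(90) * 1000:.2f} ms")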

Note that these cues will only aid in localizing the sound source's azimuth (the angle between the source and the sagittal plane), not its elevation (the angle between the source and the horizontal plane through both ears), unless the two detectors are positioned at different heights in addition to being separated in the horizontal plane. In animals, however, rough elevation information is gained simply by tilting the head, provided that the sound lasts long enough to complete the movement. This explains the innate behavior of cocking the head to one side when trying to localize a sound precisely. Instantaneous localization in more than two dimensions from time-difference or amplitude-difference cues alone requires more than two detectors. However, in many animals the attenuation a sound receives in travelling from the source to the eardrum varies in a quite complex way: the frequency-dependent attenuation varies with both azimuth and elevation. These variations can be summarized in the head-related transfer function, or HRTF. As a result, when the sound is wideband (that is, has its energy spread over the audible spectrum), it is possible for an animal to estimate both azimuth and elevation simultaneously without tilting its head. Additional information can be gained by moving the head, so that the HRTF for both ears changes in a way known (implicitly) by the animal.
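
A minimal sketch of how an HRTF is applied in engineering practice: convolving a mono signal with measured left- and right-ear head-related impulse responses for one direction yields a binaural signal that appears to come from that direction (the impulse-response arrays are assumed to be available from a measurement set):

    import numpy as np

    def render_binaural(mono, hrir_left, hrir_right):
        """Spatialize a mono signal with the head-related impulse responses
        (time-domain HRTFs) measured for a single source direction.

        Returns an (n, 2) stereo array: column 0 is the left ear,
        column 1 the right ear.
        """
        left = np.convolve(mono, hrir_left)
        right = np.convolve(mono, hrir_right)
        return np.stack([left, right], axis=1)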

In vertebrates, interaural time differences are known to be calculated in the superior olivary nucleus of the brainstem. According to Jeffress[1], this calculation relies on delay lines: neurons in the superior olive which accept innervation from each ear through axons of different lengths. Some cells are more directly connected to one ear than to the other, and are thus specific for a particular interaural time difference. This theory is equivalent to the mathematical procedure of cross-correlation. However, because Jeffress' theory is unable to account for the precedence effect, in which only the first of multiple identical sounds is used to determine the sounds' location (thus avoiding confusion caused by echoes), it cannot be entirely correct, as pointed out by Gaskell[2].
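
The cross-correlation interpretation can be made concrete: the lag that maximizes the correlation between the two ear signals is an estimate of the ITD. A toy sketch (the signals and sample rate are illustrative assumptions):

    import numpy as np

    def estimate_itd(left, right, fs):
        """Estimate the ITD by cross-correlating the two ear signals.

        Returns the lag (seconds) of the left signal relative to the right:
        negative means the sound reached the left ear first.
        """
        corr = np.correlate(left, right, mode="full")
        lag = np.argmax(corr) - (len(right) - 1)
        return lag / fs

    # Toy check: a noise burst reaching the left ear 10 samples early.
    fs = 44100
    burst = np.random.randn(1024)
    left = np.concatenate([burst, np.zeros(10)])
    right = np.concatenate([np.zeros(10), burst])
    print(estimate_itd(left, right, fs) * 1000, "ms")  # about -0.23 ms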

The tiny parasitic fly Ormia ochracea has become a model organism in sound localization experiments because of its unique ear. The animal is too small for the time difference of sound arriving at the two ears to be calculated in the usual way, yet it can determine the direction of sound sources with exquisite precision. The tympanic membranes of opposite ears are directly connected mechanically, allowing resolution of arrival-time differences of only a few microseconds[3][4] and requiring a new neural coding strategy.[5] Ho and Narins[6] showed that the coupled-eardrum system in frogs can produce increased interaural vibration disparities when only small arrival-time and sound-level differences were available to the animal's head. Efforts to build directional microphones based on the coupled-eardrum structure are underway.

Monaural (filtering) cues

Monaural localization mostly depends on the filtering effects of external structures. In advanced auditory systems, these external filters include the head, shoulders, torso, and outer ear or "pinna", and their combined effect can be summarized as the head-related transfer function. Sounds are frequency-filtered in a way that depends on the angle from which they strike the various external filters. The most significant filtering cue for biological sound localization is the pinna notch: a notch-filtering effect resulting from destructive interference between waves reflected from the outer ear and the direct sound. The frequency that is selectively notch-filtered depends on the angle from which the sound strikes the outer ear. Instantaneous localization of sound source elevation in advanced systems primarily depends on the pinna notch and other head-related filtering. These monaural effects also provide azimuth information, but it is inferior to that gained from binaural cues.
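
A rough two-path illustration of why the notch frequency encodes direction: if a pinna reflection travels an extra path length relative to the direct sound, the first destructive-interference notch falls where that extra path equals half a wavelength, and the extra path length itself changes with the angle of incidence. (The 2 cm example value below is an assumption for illustration, not a measurement.)

    SPEED_OF_SOUND = 343.0  # m/s in air

    def first_notch_hz(extra_path_m):
        """First notch of a direct wave plus one reflection delayed by an
        extra path length (simple two-path approximation): cancellation
        occurs where the extra path equals half a wavelength."""
        return SPEED_OF_SOUND / (2.0 * extra_path_m)

    # An extra path of ~2 cm puts the first notch near 8.6 kHz; a different
    # angle of incidence changes the extra path, and hence the notch.
    print(f"{first_notch_hz(0.02):.0f} Hz")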

In order to enhance filtering information, many animals have large, specially shaped outer ears. Many also have the ability to turn the outer ear at will, which allows for better sound localization and also better sound detection. Bats and barn owls are paragons of monaural localization in the animal kingdom, and have thus become model organisms.

Processing of head-related transfer functions for biological sound localization occurs in the auditory cortex.

Distance cues

Neither interaural time differences nor monaural filtering information provides good distance localization. Distance can theoretically be approximated through interaural amplitude differences or by comparing the relative head-related filtering in each ear: a combination of binaural and filtering information. The most direct cue to distance is sound amplitude, which decays with increasing distance. However, this is not a reliable cue on its own, because in general it is not known how strong the sound source is. In the case of familiar sounds, such as speech, there is implicit knowledge of how strong the sound source should be, which enables a rough distance judgment to be made.
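
For a point source in the free field, the decay of amplitude with distance follows the standard 1/r pressure law, about 6 dB per doubling of distance; a minimal sketch of that relationship (a general acoustics result, not specific to this article):

    import math

    def level_drop_db(distance_m, reference_m=1.0):
        """Free-field level drop relative to a reference distance: sound
        pressure from a point source falls off as 1/r, i.e. roughly 6 dB
        per doubling of distance."""
        return 20.0 * math.log10(distance_m / reference_m)

    for d in (1, 2, 4, 8):
        print(f"{d} m: {level_drop_db(d):.1f} dB")  # 0.0, 6.0, 12.0, 18.1 dB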

In general, humans are best at judging sound source azimuth, then elevation, and worst at judging distance. Source distance is qualitatively obvious to a human observer when a sound is extremely close (the mosquito-in-the-ear effect), or when the sound is echoed by large structures in the environment (such as walls and ceilings). Such echoes provide reasonable cues to the distance of a sound source, in particular because the strength of the echoes does not depend on the distance of the source, while the strength of the sound arriving directly from the source becomes weaker with distance. As a result, the ratio of direct-to-echo strength alters the quality of the sound in a way to which humans are sensitive, making consistent, although not very accurate, distance judgments possible. This method generally fails outdoors, due to a lack of echoes, although some outdoor environments, such as mountains, do generate strong, discrete echoes. Outdoors, distance evaluation is instead largely based on the received timbre of the sound: high frequencies are attenuated more strongly by the air over long distances, so distant sounds appear duller than normal (lacking in treble).
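
A toy model of the direct-to-echo cue: the direct sound weakens with distance while the diffuse echo level stays roughly constant, so their ratio shrinks as the source recedes (the fixed echo level is an assumed stand-in for room reverberation, not a measured value):

    import math

    def direct_to_echo_db(source_distance_m, echo_level_db=-20.0):
        """Direct-to-echo level ratio (dB) for a source at a given distance.

        The direct level falls by 20*log10(r) dB (re: its level at 1 m);
        echo_level_db is an assumed, room-dependent constant."""
        direct_db = -20.0 * math.log10(source_distance_m)
        return direct_db - echo_level_db

    for r in (1, 2, 4, 8):
        print(f"{r} m: {direct_to_echo_db(r):.1f} dB")  # ratio falls with distance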

Bi-coordinate sound localization in owls

Most owls are nocturnal or crepuscular birds of prey. Because they hunt at night, they must rely on non-visual senses. Experiments by Roger Payne[7] have shown that owls are sensitive to the sounds made by their prey, not to its heat or smell. In fact, the sound cues are both necessary and sufficient for the owls to localize mice from a distant perch. For this to work, the owls must be able to accurately localize both the azimuth and the elevation of the sound source.

ITD and ILD

Owls, which hunt from above the ground, must be able to determine the necessary angle of descent, i.e. the elevation, in addition to the azimuth (the horizontal angle to the sound). This bi-coordinate sound localization is accomplished through two binaural cues: the interaural time difference (ITD) and the interaural level difference (ILD), also known as the interaural intensity difference (IID). This ability of owls is unusual: in mammals such as humans, which localize largely in a two-dimensional world, ITD and ILD are redundant cues for azimuth.

ITD occurs whenever the distance from the sound source to the two ears differs, resulting in different arrival times of the sound at the two ears. When the sound source is directly in front of the owl, the ITD is zero. In sound localization, ITDs are used as cues for location in azimuth. ITD changes systematically with azimuth: sounds to the right arrive first at the right ear; sounds to the left arrive first at the left ear.

In mammals, a level difference between the sounds at the two ears is caused by the sound-shadowing effect of the head. But in many species of owls, level differences arise primarily for sounds that are shifted above or below the elevation of the horizontal plane. This is because of the asymmetrical placement of the ear openings in the owl's head, such that sounds from below the owl are louder in the left ear and sounds from above are louder in the right ear[8]. IID is a measure of the difference in the level of the sound as it reaches each ear. In many owls, IIDs for high-frequency sounds (above 4 or 5 kHz) are the principal cues for locating sound elevation.

Parallel processing pathways in the brain

The axons of the auditory nerve originate from the hair cells of the cochlea in the inner ear. Different sound frequencies are encoded by different fibers of the auditory nerve, arranged along the length of the auditory nerve, but codes for the timing and level of the sound are not segregated within the auditory nerve. Instead, the timing of the sound is encoded by phase locking, i.e. firing at or near a particular phase angle of the sinusoidal stimulus sound wave, and its level is encoded by spike rate. Both parameters are carried by each fiber of the auditory nerve[9].

The fibers of the auditory nerve innervate both cochlear nuclei in the brainstem, the cochlear nucleus magnocellularis and the cochlear nucleus angularis (see figure). The neurons of the nucleus magnocellularis phase-lock, but are fairly insensitive to variations in sound pressure, while the neurons of the nucleus angularis phase-lock poorly, if at all, but are sensitive to variations in sound pressure. These two nuclei are the starting points of two separate but parallel pathways to the inferior colliculus: the pathway from nucleus magnocellularis processes ITDs, and the pathway from nucleus angularis processes IID.

[Figure: Parallel processing pathways in the brain for time and level in sound localization in the owl.]

In the time pathway, the nucleus laminaris is the first site of binaural convergence. It is here that the ITD is detected and encoded, using neuronal delay lines and coincidence detection, as in the Jeffress model: when phase-locked impulses coming from the left and right ears coincide at a laminaris neuron, the cell fires most strongly. Thus, the nucleus laminaris acts as a delay-line coincidence detector, converting distance traveled into time delay and generating a map of interaural time difference. Neurons from the nucleus laminaris project to the core of the central nucleus of the inferior colliculus and to the anterior lateral lemniscal nucleus.
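
A toy sketch of the delay-line-and-coincidence idea behind the Jeffress model (array sizes, delays, and spike trains are illustrative assumptions, not a biophysical simulation):

    import numpy as np

    def jeffress_map(left_spikes, right_spikes, fs, max_delay_s=2e-4, n_cells=41):
        """Each model neuron receives the left input delayed by d and the
        right input delayed by (max_delay - d); it responds in proportion
        to coincidences between the two delayed spike trains, so the most
        active cell indexes the stimulus ITD. Assumes equal-length inputs."""
        max_lag = int(max_delay_s * fs)
        delays = np.linspace(0, max_lag, n_cells).astype(int)
        responses = []
        for d in delays:
            l = np.pad(left_spikes, (d, max_lag - d))    # axonal delay, left ear
            r = np.pad(right_spikes, (max_lag - d, d))   # axonal delay, right ear
            responses.append(np.sum(l * r))              # coincidence count
        best = int(np.argmax(responses))
        itd = (2 * delays[best] - max_lag) / fs          # implied ITD in seconds
        return np.array(responses), itd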

In the sound level pathway, the posterior lateral lemniscal nucleus is the site of binaural convergence, where IID is processed. In each brain hemisphere, stimulation of the contralateral ear excites, and stimulation of the ipsilateral ear inhibits, the neurons of this nucleus, with the two hemispheres operating independently. The degree of excitation and inhibition depends on sound pressure, and the difference between the strength of the inhibitory input and that of the excitatory input determines the rate at which the lemniscal neurons fire. Thus, the response of these neurons is a function of the difference in sound pressure between the two ears.
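
A toy excitation-inhibition sketch of such a neuron: the firing rate reflects the contralateral (excitatory) level minus the ipsilateral (inhibitory) level, clipped to a physiological range (the gain and maximum rate are invented illustrative parameters):

    import numpy as np

    def lemniscal_rate(contra_db, ipsi_db, gain=5.0, max_rate=200.0):
        """Firing rate (spikes/s) of a model ILD-sensitive neuron: net
        drive tracks contralateral minus ipsilateral sound level."""
        drive = gain * (contra_db - ipsi_db)
        return float(np.clip(drive, 0.0, max_rate))

    for ild in (-20, -10, 0, 10, 20):  # contralateral minus ipsilateral, dB
        print(f"{ild:+d} dB -> {lemniscal_rate(60 + ild/2, 60 - ild/2):.0f} spikes/s")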

At the lateral shell of the central nucleus of the inferior colliculus, the time and sound pressure pathways converge. The lateral shell projects to the external nucleus, where each space-specific neuron responds to acoustic stimuli only if the sound originates from a restricted area in space, i.e. the receptive field of that neuron. These neurons respond exclusively to binaural signals containing the same ITD and IID that would be created by a sound source located in the neuron's receptive field. Thus, their receptive fields arise from the neurons' tuning to particular combinations of ITD and IID, each within a narrow range. These space-specific neurons can thus form a map of auditory space in which the positions of receptive fields in space are isomorphically projected onto the anatomical sites of the neurons[10].

Significance of asymmetrical ears for localization of elevation

The ears of many species of owls, including the barn owl (Tyto alba), are asymmetrical. For example, in barn owls, the placement of the two ear flaps (opercula) lying directly in front of the ear canal openings differs for each ear: the center of the left ear flap is slightly above a horizontal line passing through the eyes and directed downward, while the center of the right ear flap is slightly below the line and directed upward. In two other species of owls with asymmetrical ears, the saw-whet owl and the long-eared owl, the asymmetry is achieved by very different means: in saw-whet owls, the skull is asymmetrical; in the long-eared owl, the skin structures lying near the ear, together with a horizontal membrane, form asymmetrical entrances to the ear canals. Thus, ear asymmetry seems to have evolved on at least three different occasions among owls. Because owls depend on their sense of hearing for hunting, this convergent evolution in owl ears suggests that asymmetry is important for sound localization in the owl.

Ear asymmetry causes sound originating from below eye level to sound louder in the left ear, and sound originating from above eye level to sound louder in the right ear. Asymmetrical ear placement also causes the IID for high frequencies (between 4 kHz and 8 kHz) to vary systematically with elevation, converting IID into a map of elevation. It is thus essential for an owl to be able to hear high frequencies. Many birds have the neurophysiological machinery to process both ITD and IID, but, because they have small heads and relatively low-frequency sensitivity, they use both parameters only for localization in azimuth. Through evolution, the ability to hear frequencies higher than 3 kHz, the highest frequency of owl flight noise, enabled owls to exploit elevational IIDs produced by small ear asymmetries that arose by chance, and began the evolution of more elaborate forms of ear asymmetry[11].

Another demonstration of the importance of ear asymmetry in owls is that, in experiments, owls with symmetrical ears, such as the screech owl (Otus asio) and the great horned owl (Bubo virginianus), could not be trained to locate prey in total darkness, whereas owls with asymmetrical ears could be trained[12].

Interaural level difference

Interaural level differences (ILDs), sometimes called interaural intensity differences (IIDs), are differences in the sound pressure level arriving at the two ears, and are important cues that humans and animals use to localize higher-frequency sounds; the eardrums are sensitive only to sound pressure changes. The interaural time difference is another source of information for sound localization.

Neurons sensitive to ILDs are excited by stimulation of one ear and inhibited by stimulation of the other ear, such that the response magnitude of the cell depends on the relative strengths of the two inputs, which in turn depend on the sound intensities at the ears.

In the auditory midbrain nucleus, the inferior colliculus (IC), many ILD-sensitive neurons have response functions that decline steeply from maximum to zero spikes as a function of ILD. However, there are also many neurons with much shallower response functions that do not decline to zero spikes.

References

  1. ^ Jeffress, L.A. (1948). "A place theory of sound localization." Journal of Comparative and Physiological Psychology 41: 35-39.
  2. ^ Gaskell, H. (1983). "The precedence effect." Hearing Research 11: 277-303.
  3. ^ Miles, R.N., Robert, D., Hoy, R.R. (1995). "Mechanically coupled ears for directional hearing in the parasitoid fly Ormia ochracea." Journal of the Acoustical Society of America 98(6): 3059-3070. PMID 8550933. doi:10.1121/1.413830.
  4. ^ Robert, D., Miles, R.N., Hoy, R.R. (1996). "Directional hearing by mechanical coupling in the parasitoid fly Ormia ochracea." Journal of Comparative Physiology A 179(1): 29-44. PMID 8965258. doi:10.1007/BF00193432.
  5. ^ Mason, A.C., Oshinsky, M.L., Hoy, R.R. (2001). "Hyperacute directional hearing in a microscale auditory system." Nature 410(6829): 686-690. PMID 11287954. doi:10.1038/35070564.
  6. ^ Ho, C.C., Narins, P.M. (2006). "Directionality of the pressure-difference receiver ears in the northern leopard frog, Rana pipiens pipiens." Journal of Comparative Physiology A 192(4): 417-429.
  7. ^ Payne, R.S. (1962). "How the Barn Owl Locates Prey by Hearing." The Living Bird, First Annual of the Cornell Laboratory of Ornithology: 151-159.
  8. ^ Knudsen, E.I. (1981). "The hearing of the barn owl." Scientific American 245(6): 113-125.
  9. ^ Zupanc, G.K.H. (2004). Behavioral Neurobiology: An Integrative Approach. New York: Oxford University Press, 142-149.
  10. ^ Knudsen, E.I., Konishi, M. (1978). "A Neural Map of Auditory Space in the Owl." Science 200: 795-797.
  11. ^ Konishi, M., Volman, S.F. (1994). "Adaptations for bi-coordinate sound localization in owls." Neural Basis of Behavioral Adaptations: 1-9.
  12. ^ Payne, R.S. (1971). "Acoustic Location of Prey by Barn Owls (Tyto alba)." Journal of Experimental Biology 54: 535-573.
