Audio signal processing

From Wikipedia, the free encyclopedia

Audio signal processing, sometimes referred to as audio processing, is the intentional alteration of auditory signals, or sound. As audio signals may be electronically represented in either digital or analog format, signal processing may occur in either domain. Analog processors operate directly on the electrical signal, while digital processors operate mathematically on the binary representation of that signal.

Human hearing extends from approximately 20 Hz to 20 kHz, determined both by physiology of the human hearing system and by human psychology. These properties are analysed within the field of psychoacoustics.

History of audio processing

Audio processing was necessary for early radio broadcasting, as there were many problems with studio-to-transmitter links.

Analog signals

An analog representation is usually electrical; a voltage level represents the air pressure waveform of the sound.

Digital signals

A digital representation expresses the pressure waveform as a sequence of symbols, usually binary numbers. This permits signal processing using digital circuits such as microprocessors and computers. Although such a conversion can be prone to loss, most modern audio systems use this approach, as the techniques of digital signal processing are much more powerful and efficient than analog-domain signal processing.[1]

In order to convert the continuous-time analog signal to a discrete-time digital representation, it must be sampled and quantized. Sampling is the division of the signal into discrete intervals at which analog voltage readings will be taken. Quantization is the conversion of the instantaneous analog voltage into a binary representation. Electronically, these functions are performed by an analog-to-digital converter.
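The two steps above can be sketched in a few lines. This is an illustrative model, not a real analog-to-digital converter API: the `signal` function stands in for the continuous analog voltage, and the function and parameter names are invented for the example.

```python
import math

def sample_and_quantize(signal, duration_s, sample_rate_hz, bits):
    """Model of an ADC: sample a continuous-time signal at discrete
    intervals, then quantize each reading to an integer code.

    `signal` is a function of time (seconds) returning a value in
    [-1.0, 1.0], standing in for the instantaneous analog voltage.
    """
    num_samples = int(duration_s * sample_rate_hz)
    levels = 2 ** bits                  # number of quantization steps
    codes = []
    for n in range(num_samples):
        t = n / sample_rate_hz          # sampling: discrete time instant
        v = signal(t)
        # quantization: map the value to the nearest integer code
        codes.append(round((v + 1.0) / 2.0 * (levels - 1)))
    return codes

# A 1 kHz sine sampled at 44.1 kHz with 16-bit resolution, as on a CD.
codes = sample_and_quantize(lambda t: math.sin(2 * math.pi * 1000 * t),
                            duration_s=0.01, sample_rate_hz=44100, bits=16)
```

In a hardware converter both steps happen electrically; the sketch only shows how a continuous waveform becomes a finite sequence of integer codes.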

The length of the sampling interval determines the maximum frequency that can be encoded. The Nyquist–Shannon sampling theorem states that a signal can be exactly reconstructed from its samples if the sampling frequency is greater than twice the highest frequency of the signal. Because the human ear cannot perceive frequencies above approximately 20 kHz (a limit that falls with the listener's age), the sampling rate has to be above 40 kHz. Commercial CDs are recorded at 44.1 kHz.
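The consequence of violating the theorem can be checked numerically. In the sketch below (an illustration, with frequencies chosen for the example), a 30 kHz tone sampled at 44.1 kHz produces exactly the same sample values as an inverted 14.1 kHz tone, so the high frequency is irrecoverably misread as a lower one (aliasing):

```python
import math

fs = 44100              # sampling rate, Hz
f_high = 30000          # above the Nyquist limit of fs/2 = 22050 Hz
f_alias = fs - f_high   # 14100 Hz: the tone the samples actually describe

# The samples of the 30 kHz tone are indistinguishable from those of
# an inverted 14.1 kHz tone: sin(2*pi*(fs - f)*n/fs) == -sin(2*pi*f*n/fs).
for n in range(100):
    a = math.sin(2 * math.pi * f_high * n / fs)
    b = -math.sin(2 * math.pi * f_alias * n / fs)
    assert abs(a - b) < 1e-9
```

This is why converters place an anti-aliasing filter before sampling: once two frequencies share the same samples, no later processing can tell them apart.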

The bit resolution used during the quantization process determines the minimum voltage that can be digitally represented, and thus the digital signal's dynamic range. As the dynamic range of an audio signal is, by definition, limited by noise, the resolution need only be high enough to capture signals above the noise floor.
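The relationship between bit resolution and dynamic range follows directly: each added bit doubles the number of quantization levels, widening the ratio between the largest representable value and one quantization step by about 6 dB. A minimal calculation (the function name is ours, not a standard API):

```python
import math

def dynamic_range_db(bits):
    """Ratio, in decibels, between the largest representable value and
    one quantization step: 20*log10(2**bits), roughly 6.02 dB per bit."""
    return 20 * math.log10(2 ** bits)

# 16-bit quantization, as used on CDs, gives roughly 96 dB of range.
print(round(dynamic_range_db(16)))  # → 96
```

So a 16-bit representation comfortably exceeds the noise floor of most analog sources, which is why higher resolutions mainly matter for production headroom rather than playback.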

Application areas

Processing methods and application areas include storage, level compression, data compression, transmission, and enhancement (e.g., equalization, filtering, noise cancellation, and echo or reverb removal or addition).

Audio broadcasting

Audio broadcasting (whether for television or radio) is perhaps the biggest market segment and user area for audio processing products globally.[citation needed]

Traditionally the most important audio processing (in audio broadcasting) takes place just before the transmitter. Studio audio processing is limited in the modern era due to digital audio systems (mixers, routers) being pervasive in the studio.

In audio broadcasting, the audio processor must

  • prevent overmodulation, and minimize it when it occurs
  • maximize overall loudness
  • compensate for non-linear transmitters, more common with medium wave and shortwave broadcasting

References

  1. ^ Zölzer, Udo (1997). Digital Audio Signal Processing. John Wiley and Sons. ISBN 0-471-97226-6.