
2.0 Information Sources


2.1 Audio Signals
Dr. Ing. Saviour Zammit

Audio Information and Transducers

Physical Characteristics of Sound
The Human Sound Production System (HSPS)
The Human Auditory System (HAS)
Spectral Content
Audio Input Transducers
Audio Output Transducers


Sound
Sound is a vibration of air molecules, caused by a vibrating object which compresses and rarefies the air as its vibrating surface pushes and pulls on the molecules.
The sound waves thus produced travel (propagate) in a direction parallel to the compression and rarefaction of the air molecules; a sound wave is therefore a longitudinal wave.
The pressure waves most often propagate through the air to the ear, where the sound is first analyzed into frequency bands by the ear and then carried by nerves to the brain for interpretation and action.
Human beings both produce and detect sound, and have evolved sound communication to such a degree that it distinguishes us from all other life forms on Earth.

Physical Characteristics

Sound waves have three important temporal characteristics:
speed,
amplitude and
period (or frequency).
The period T of a wave is the time taken for one complete repetition of the waveform.
The frequency is then the reciprocal of the period, f = 1/T.
The speed of sound through air depends on the pressure and temperature of the air.
At 1 standard atmosphere, at sea level and 20° Celsius, it is 343.8 m/s.

[Figure: a sinusoidal pressure waveform versus time, showing the amplitude A and the period T; frequency f = 1/T]


Spatial Variations of Sound

The wavelength is the distance in space over which the waveform goes through one complete cycle, and is denoted by the symbol lambda (λ).
The speed of the waveform v is related to the frequency f and the wavelength λ by
v = f·λ

[Figure: a sinusoidal pressure waveform versus distance (m), showing the amplitude and the wavelength λ; speed v = f·λ]
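As a quick worked example of v = f·λ, the minimal Python sketch below computes wavelengths at the extremes of the audible range; the 343.8 m/s speed is the figure quoted above, and the chosen frequencies are illustrative.

```python
# Minimal sketch: wavelengths across the audible range, assuming the
# slide's value of 343.8 m/s for the speed of sound in air at 20 °C.
SPEED_OF_SOUND = 343.8  # m/s

def wavelength(frequency_hz: float) -> float:
    """Return the wavelength in metres for a given frequency, lambda = v / f."""
    return SPEED_OF_SOUND / frequency_hz

for f in (20, 1_000, 20_000):   # illustrative low, mid and high frequencies
    print(f"{f:>6} Hz -> {wavelength(f):.4f} m")
# 20 Hz -> 17.1900 m, 1000 Hz -> 0.3438 m, 20000 Hz -> 0.0172 m
```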

Sound frequencies

The human ear can hear sound frequencies from 20 Hz to some 22,000 Hz.
Animals such as dogs and bats can hear much higher sounds (ultrasound).
This is the range of audible frequencies, and it changes with age.
Young people have a larger range than old people, who first lose perception of the higher frequencies.
If p(t) measures the pressure of a sound wave, then for a pure tone we can write
p(t) = A sin(2πt / T)
A above middle C on an instrument is the pure tone of 440 Hz, and its pressure function is then of the form
p(t) = A sin(880πt)
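A minimal Python sketch of this pure-tone formula, sampling p(t) = A sin(2πft) and writing one second of the tone to a mono WAV file; the 44.1 kHz sample rate, the amplitude and the file name are illustrative choices, not taken from the slides.

```python
import math
import struct
import wave

SAMPLE_RATE = 44_100   # samples per second (illustrative choice)
FREQUENCY = 440.0      # Hz, A above middle C
AMPLITUDE = 0.5        # relative amplitude, 1.0 = full scale
DURATION = 1.0         # seconds

# Sample p(t) = A*sin(2*pi*f*t) at discrete times n / SAMPLE_RATE
samples = [
    AMPLITUDE * math.sin(2 * math.pi * FREQUENCY * n / SAMPLE_RATE)
    for n in range(int(SAMPLE_RATE * DURATION))
]

with wave.open("pure_tone.wav", "wb") as wav:
    wav.setnchannels(1)      # mono
    wav.setsampwidth(2)      # 16-bit samples
    wav.setframerate(SAMPLE_RATE)
    frames = b"".join(struct.pack("<h", int(s * 32767)) for s in samples)
    wav.writeframes(frames)
```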


Amplitude
The amplitude of a sound wave is interpreted as loudness by the human auditory system.
The HAS is not linear, in that it can hear very loud sounds as well as very quiet sounds, over a range of some 11 to 12 orders of magnitude.
Amplitude is related to the motion of the molecules in the medium, which is of the order of one thousandth of a centimeter for a very loud sound, and millionths of a centimeter for normal sounds.
In order to compare values with large variations of magnitude we use decibels. The difference in level in decibels between two power values P1 and P2 is given by
Level = 10 log10(P1 / P2) dB

Sound Intensity
The threshold of hearing is one picowatt (1 pW) of sound power per m² at the listener's location, and all other sounds are compared to this.
Thus a 100 W stereo is 10 log10((100 × 0.01) / 10⁻¹²) = 120 dB at 1 m (assuming 1% conversion efficiency from electrical to sound power).
Level = 10 log10(P1 / P2) dB
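The level formula and the worked stereo example above can be reproduced directly; a minimal sketch (the function name is my own):

```python
import math

def level_db(p1_watts: float, p2_watts: float) -> float:
    """Level difference in decibels between two powers: 10*log10(P1/P2)."""
    return 10 * math.log10(p1_watts / p2_watts)

# The slide's worked example: a 100 W amplifier with an assumed 1 %
# electro-acoustic conversion efficiency, compared with the 1 pW threshold.
acoustic_power = 100 * 0.01   # 1 W of sound power
threshold = 1e-12             # 1 pW
print(f"{level_db(acoustic_power, threshold):.1f} dB")   # 120.0 dB
```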


Amplitude
Intensity (W per m²)    dB    Source
30,000                  165   jet
300                     145   threshold of pain
3                       125   factory noise
0.03                    105   highway traffic
0.0003                   85   appliance
0.000003                 65   conversation
0.00000003               45   quiet room
0.0000000003             25   whisper
0.000000000001            0   threshold of hearing
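The dB column follows from the same level formula, taking the 10⁻¹² W/m² threshold of hearing as the reference; a short sketch over a few of the rows (the table rounds to whole decibels):

```python
import math

THRESHOLD = 1e-12  # W/m^2, threshold of hearing

# A few rows of the table above, as intensity in W/m^2
sources = {
    "jet": 30_000,
    "threshold of pain": 300,
    "conversation": 3e-6,
    "whisper": 3e-10,
    "threshold of hearing": 1e-12,
}

for name, intensity in sources.items():
    db = 10 * math.log10(intensity / THRESHOLD)
    print(f"{name:<22}{db:5.1f} dB")
# jet 164.8, threshold of pain 144.8, conversation 64.8,
# whisper 24.8, threshold of hearing 0.0
```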

The Human Sound Production System


The human sound production organ consists of:
the vocal cords in the throat (larynx),
the vocal tract from the back of the mouth to the vocal cords,
the nasal and mouth chambers,
the tongue,
the teeth,
the lips, and
the brain, which orchestrates the whole.
There is also evidence that the brain uses feedback from the ear to control the HSPS.


The Human Sound Production System

We produce both voiced and unvoiced sounds.
Voiced sounds (the vowels, for example) are caused by the vibration of the vocal cords, which then resonate in the nasal and mouth chambers, the latter being highly configurable by the action of the jaw and tongue.
Unvoiced sounds (f and s, for example) are caused by air forced by the lungs through the mouth without vibrating the vocal cords, escaping through constrictions formed by the tongue, the mouth and the teeth.
The male of the species produces sounds with frequencies between 75 Hz and 3000 Hz.
Females produce sounds with frequencies between 150 Hz and 4000 Hz.

Model of the Human Sound Production System

This model is frequently used in sound analysis and synthesis, two processes which form the basis of a family of sound compression techniques known as Linear Predictive Coding (LPC).
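As an illustration of how this model is exploited, the sketch below fits an all-pole (vocal-tract) filter to one frame of signal using the autocorrelation method and the Levinson-Durbin recursion, the standard analysis step in LPC coders; the frame length, prediction order and synthetic test signal are illustrative assumptions, not details taken from the slides.

```python
import numpy as np

def lpc(frame: np.ndarray, order: int):
    """Estimate linear-prediction coefficients for one frame via the
    autocorrelation method and the Levinson-Durbin recursion."""
    n = len(frame)
    # Autocorrelation lags 0..order of the (windowed) frame
    r = np.array([np.dot(frame[:n - k], frame[k:]) for k in range(order + 1)])

    a = np.zeros(order + 1)
    a[0] = 1.0
    error = r[0]
    for i in range(1, order + 1):
        # Reflection coefficient for this stage
        acc = r[i] + np.dot(a[1:i], r[i - 1:0:-1])
        k = -acc / error
        a_new = a.copy()
        a_new[i] = k
        a_new[1:i] = a[1:i] + k * a[i - 1:0:-1]
        a = a_new
        error *= 1.0 - k * k
    # 1/A(z) models the vocal tract; error ~ power of the excitation (residual)
    return a, error

# Usage: fit a 10th-order predictor to a 20 ms frame of a synthetic "voiced" signal
fs = 8000
t = np.arange(int(0.02 * fs)) / fs
frame = np.sin(2 * np.pi * 150 * t) + 0.5 * np.sin(2 * np.pi * 450 * t)
coeffs, residual_power = lpc(frame * np.hamming(len(frame)), order=10)
```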


The Human Auditory System


The Human Auditory System (HAS) consists of two ears and the brain.
The ear is divided into three parts: the outer ear, the middle ear and the inner ear.
The outer ear consists of the pinna (ear lobe), the auditory canal, and the outer side of the tympanic membrane (ear drum).
The middle ear consists of the inner side of the ear drum and the three ossicles: the malleus, the incus and the stapes.
The stapes abuts onto the oval window of the inner ear.
The inner ear is filled with fluid and consists of the labyrinth and the cochlea.
The labyrinth consists of three tubes at 90 degrees to one another and imparts a sense of balance to human beings.
The cochlea is lined with hair cells which are sensitive to vibrations and trigger nerve cells, which transmit impulses to the brain via the auditory nerve.


The Human Auditory System


The pinna collects sound and funnels it along the auditory canal onto the ear drum.
The malleus is connected to the ear drum so that it moves with it.
The middle ear is connected to the inside of the mouth by the Eustachian tube, which allows air to move in and out of the middle ear to equalize the pressure on the ear drum.
The sound passes through the middle ear via the three small bones of hearing (the ossicles) on to the inner ear, which is filled with fluid.
The movement of the fluid in the cochlea stimulates the hair cells inside it to trigger nerve impulses, which are carried to the brain by the auditory nerve.
The hairs are of varying size and are sensitive to different sound frequencies.
The ear thus acts as a sound spectrum analyzer, and the brain interprets these nerve impulses as sound in the frequency/amplitude domain.
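This "spectrum analyzer" view of the ear can be mimicked in software with an FFT, which is essentially what a spectrum plot in an audio editor shows; a minimal sketch with an illustrative two-tone signal:

```python
import numpy as np

# Decompose a short signal into frequency bins with an FFT, roughly what
# the cochlea does mechanically. Signal parameters are illustrative.
fs = 8000                                  # sample rate, Hz
t = np.arange(fs) / fs                     # one second of samples
signal = np.sin(2 * np.pi * 300 * t) + 0.3 * np.sin(2 * np.pi * 1200 * t)

spectrum = np.abs(np.fft.rfft(signal)) / len(signal)
freqs = np.fft.rfftfreq(len(signal), d=1 / fs)

# The two strongest bins land at the two component frequencies
top = freqs[np.argsort(spectrum)[-2:]]
print(sorted(top.tolist()))                # [300.0, 1200.0]
```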

HAS Sensitivity

The Human Auditory System (HAS) is sensitive to frequencies from 20 Hz to 20,000 Hz.
Note that the human voice range is 150 Hz to 4 kHz.
The sensitivity (threshold of hearing) depends on frequency; the ear is most sensitive to sounds around 2 to 4 kHz.


Audio Input Transducers


Before we can process sound we have to convert sound waves into the electrical domain.
This is done by means of a microphone, which is a sound input transducer.
Four types of microphone in use today are:
the moving coil microphone,
the ribbon microphone,
the capacitor (condenser) microphone,
the electret microphone.

Moving coil Dynamic Microphones

A microphone is a transducer which changes air pressure variations into an electrical signal.
In a moving coil microphone a thin diaphragm A oscillates with the sound and moves a coil B in the field of a permanent magnet C.
The motion of the coil in the magnetic field induces electric currents in the coil, which can be picked up and amplified.
It is rugged, and the least sensitive and least expensive of the types listed.
It does not require an external supply of electricity.


Ribbon Microphone

The ribbon microphone consists of a thin metal foil or ribbon suspended in a magnetic field.
When the sound pressure variations hit the ribbon it vibrates, generating an electrical current.
The device is not rugged, though it is sensitive.
It can be directional, having good side rejection.
It does not require an external supply of electricity.

Condenser Microphone

The condenser microphone consists of a capacitor with one plate which can move under the action of sound waves.
The change in capacitance causes charge to move in the circuit formed by the battery, the capacitor and the resistor, as shown.
A condenser microphone requires an electrical supply, either internal (battery) or external.
Condenser microphones span the range from cheap throw-aways to high-fidelity quality instruments.


Electret Condenser Microphones

In the electret condenser microphone, the moving plate is made of a permanently charged membrane.
When this moves, it induces charges to move in the electrical connections.
Thus the electret condenser microphone does not require a supply of electricity.
The mass production techniques needed to produce electrets cheaply don't lend themselves to the precision needed to produce the highest quality microphones.

Audio Output Transducers


The loudspeaker reproduces sound waves from electrical signals.
Two types of loudspeaker are:
the moving coil loudspeaker,
the electrostatic loudspeaker.


The Moving Coil Loudspeaker


The most frequently used speaker is the moving coil speaker.
It consists of a moving diaphragm (cone) to which a coil is attached.
The coil is located in the field of a strong permanent magnet.
The electrical analogue of the sound is passed through the coil, which creates a motive force on the cone.

Audio Signal & Spectrum - Audacity
