
SAINT MICHAEL COLLEGE

HINDANG, LEYTE

Written Report
In
Introduction to
Linguistics
Submitted by:
Kyndie A. Madrazo
Mariecris Lor
Gerardo Monderondo
BSEd-3 STUDENTS

Submitted to:
Junryl T. Corre
SUBJECT INSTRUCTOR
Chapter 9 (Language Processing: Humans and Computers)

Human Language Processing

Psycholinguistics: the study of linguistic performance in speech production and comprehension.

We usually don’t have problems producing or understanding sentences in our language, and we do both without effort or awareness. Yet some grammatical sentences are difficult to understand (The horse raced past the barn fell.), while some ungrammatical sentences are easy to understand (*The baby seems sleeping.).

This means that language processing is more than grammar alone—there are
psychological mechanisms that work with the grammar to allow us to produce and
comprehend language.

Comprehension: The Speech Signal


Speech sounds can be described by their acoustic (or physical) properties. The vibrations
of our vocal cords cause variations in air pressure, and the sounds we produce can be
described in terms of:

 Fundamental frequency (pitch): how fast the variations of air pressure occur.
 Intensity: the magnitude of the variations, which determines the loudness of a
sound.
 The quality of a speech sound is determined by the shape of the vocal tract; the
shape affects how the sound waves travel.

Spectrograms, or voiceprints, can be created by computers and are used to analyze
speech sounds. Spectrograms indicate the intensity, formants (the strongest harmonics
produced by the shape of the vocal tract during production), and pitch of speech sounds,
and they demonstrate that different speech sounds have recognizably different acoustic
properties.

When we push air through the glottis, vibrating vocal cords produce variations
in air pressure.

– The speed of these variations in air pressure determines the fundamental
frequency of the sound. Fundamental frequency is perceived as pitch by the
hearer.

– The magnitude of the variations (or intensity) determines the loudness of the sound.

• An image of the speech signal is displayed in a spectrogram.

• Vowels are indicated by dark bands called formants.

Parsing of sound
• Categorical perception – we do not perceive linguistic sounds as a continuum.

• Duplex perception – we are able to integrate “spliced” parts of a sound played into each
ear.

VOT

• Voice onset time differentiates voiced sounds from voiceless.

[Figure: The six English plosives [p, b, t, d, k, g], each followed by the vowel [a]. Top panel: spectrograms, with the y-axis covering the frequency range 0–5 kHz (each 1 kHz marked by a horizontal gray line) and the x-axis showing time, about 4 s overall. Bottom panel: the same data as time-aligned waveforms.]
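Categorical perception of VOT can be sketched as a toy model in Python (not part of the original report). The ~25 ms category boundary for English bilabial stops is an assumed round figure, and real perception is probabilistic near the boundary:

```python
def perceive_bilabial(vot_ms, boundary_ms=25):
    """Categorical perception of voice onset time (VOT): listeners do not
    hear a continuum, only the categories /b/ (short VOT) and /p/ (long
    VOT).  The ~25 ms boundary here is an assumed illustrative value."""
    return "b" if vot_ms < boundary_ms else "p"

# A continuum of VOT values in 10 ms steps is heard as just two categories.
continuum = [0, 10, 20, 30, 40, 50]
print([perceive_bilabial(v) for v in continuum])
# ['b', 'b', 'b', 'p', 'p', 'p']
```

Even though the stimuli change gradually, the perceived category jumps abruptly at the boundary, which is the hallmark of categorical perception.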

VOT perception

Formants – how to splice a sound

Frequency response curves indicate the preferred resonating frequencies of the vocal
tract. Each of the preferred resonating frequencies of the vocal tract (each bump in
the frequency response curve) is known as a formant. They are usually referred to as F1,
F2, F3, etc. For example, the formants for a typical adult male saying a schwa are
approximately: F1 (first formant) 500 Hz, F2 (second formant) 1500 Hz, F3 (third
formant) 2500 Hz.
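These schwa formant values follow from modeling the vocal tract as a uniform tube, closed at the glottis and open at the lips. A minimal Python sketch, assuming the standard textbook values of a 17.5 cm tract length and a 35,000 cm/s speed of sound:

```python
def schwa_formants(tract_length_cm=17.5, speed_of_sound_cm_s=35000, n=3):
    """Model the vocal tract during a schwa as a uniform quarter-wave
    tube (closed at the glottis, open at the lips).  Such a tube
    resonates at odd multiples of the quarter-wavelength frequency:

        F_k = (2k - 1) * c / (4 * L)

    These resonances are the formants F1, F2, F3, ..."""
    return [(2 * k - 1) * speed_of_sound_cm_s / (4 * tract_length_cm)
            for k in range(1, n + 1)]

print(schwa_formants())  # [500.0, 1500.0, 2500.0]
```

The odd-multiple pattern (500, 1500, 2500 Hz) is why the schwa formants in the text are evenly spaced 1000 Hz apart.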
Duplex perception

• Splice the formants apart:

– Play one set in one ear.

– Play the other set in the other ear.

• What will happen? The listener hears the sound in one ear and a chirp in the other.


Comprehension: Speech Perception

• Our speech perception mechanisms allow us to understand speech despite the “segmentation
problem” and the “lack of invariance problem.”

– Normalization procedures let us control for individual differences, speed, accents, etc.

– Knowledge of the underlying phonemic system aids in categorical perception.

Comprehension: Lexical Access and Word Recognition

• Lexical decision tasks measure response time. Listeners respond more slowly to:

• infrequently used words

• possible nonwords (rather than impossible nonwords)

• words with larger phonological neighborhoods

• ambiguous words

Serial matching
• Forster (1989) – serial matching in word mapping.

• The search uses either a frequency-ordered phonological list or a semantic associative list.

– Frequency effects on word recognition

• More frequent words are accessed faster.

– Frequency interaction with context

• Context plays a role, allowing less frequent words to be accessed faster.
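The idea of serial, frequency-ordered search can be sketched as a toy Python model. The word list, the step counts, and the promotion of context-related words to the front of the search are illustrative assumptions, not Forster's actual mechanism:

```python
def serial_access(target, frequency_ordered_list, context_words=()):
    """Toy sketch of serial matching: the lexicon is scanned in
    frequency order, so more frequent words (earlier in the list) are
    found in fewer steps.  As a stand-in for the frequency x context
    interaction, context-related words are (hypothetically) promoted
    to the front of the search."""
    search_order = [w for w in frequency_ordered_list if w in context_words]
    search_order += [w for w in frequency_ordered_list if w not in context_words]
    for steps, word in enumerate(search_order, start=1):
        if word == target:
            return steps          # fewer steps = faster access
    return None                   # nonword: the search is exhausted

lexicon = ["the", "car", "house", "fig", "kumquat"]  # most to least frequent

print(serial_access("car", lexicon))                          # 2 steps (frequent)
print(serial_access("kumquat", lexicon))                      # 5 steps (infrequent)
print(serial_access("kumquat", lexicon, {"fig", "kumquat"}))  # 2 steps (context helps)
```

The same infrequent word is reached in fewer steps when supporting context is present, mirroring the frequency-by-context interaction described above.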

Priming

• Lexical priming

• Decisions are faster if the target is preceded by a semantically related word.

• Decisions are faster if the target is preceded by a phonologically related word.

– As compared to an unrelated word.


• Semantic priming: words can be activated by hearing semantically related words

– Response time will be faster on the word doctor if the listener has just heard the word nurse

• Morphological priming: a morpheme of a multimorphemic word primes a related word

– Response time will be faster on the word wool if the listener has just heard the word sheepdog

Speech Perception and Comprehension


The “segmentation problem”: how do listeners carve up the continuous speech signal
into meaningful units? Lexical access (or word recognition) is the process of searching
your lexicon for phonological strings that correspond to words. Stress and intonation
provide clues about structure.

The “lack of invariance problem”: how do listeners recognize different speech sounds
when they are used in different contexts and spoken by different people? Listeners can
normalize their perceptions to account for rate of speech and speaker pitch differences.

Bottom-up and Top-down Models


Understanding language in real time is an impressive feat, and there is a certain amount
of guesswork involved in real-time language comprehension. Many psycholinguists
believe that language perception and comprehension involve both:

Top-down processing: proceeding from semantic and syntactic information to the lexical
information from the sensory input. Listeners can predict that if a speaker says the, then
an NP is coming. In experiments, listeners seem to make much use of top-down
information.

Bottom-up processing: moving from the sensory phonetic input to phonemes, then
morphemes, etc., up to semantic interpretation. Listeners wait to construct an NP until
they hear the followed by a noun.
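The contrast between the two directions of processing can be sketched with a toy Python example. The miniature lexicon and its category labels are hypothetical, chosen only for illustration:

```python
# Hypothetical mini-lexicon mapping words to grammatical categories.
CATEGORIES = {"the": "Det", "dog": "N", "barks": "V"}

def top_down(words_so_far):
    """Top-down: as soon as a determiner is heard, predict that a noun
    phrase (NP) is being built, before the noun itself arrives."""
    if words_so_far and CATEGORIES.get(words_so_far[-1]) == "Det":
        return "predict NP"
    return "no prediction"

def bottom_up(words_so_far):
    """Bottom-up: only posit an NP once a Det followed by an N has
    actually been heard in the input."""
    cats = [CATEGORIES.get(w) for w in words_so_far]
    if len(cats) >= 2 and cats[-2:] == ["Det", "N"]:
        return "build NP"
    return "wait"

print(top_down(["the"]))           # predict NP  (commits before the noun)
print(bottom_up(["the"]))          # wait        (needs the noun first)
print(bottom_up(["the", "dog"]))   # build NP
```

The top-down route commits to a structure on partial evidence, while the bottom-up route waits for the input to license it, which is exactly the trade-off the two models describe.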

Lexical Access and Word Recognition


In order to discover more about lexical access or word recognition, psycholinguists have
devised several experiments:

Lexical decision experiments involve people deciding whether or not a string of letters
or sounds is a word. Frequently used words such as car are responded to more quickly
than infrequent words such as fig, which leads researchers to believe that frequent words
are more easily accessed in the lexicon than infrequent words. A lexical decision about
the word doctor will be faster if it has been preceded by the word nurse; this effect is
called semantic priming and could be due to semantically related words being stored in
the same part of the lexicon. Lexical access experiments show that people retrieve all
the meanings of a word.

Naming tasks require subjects to read printed words aloud. The finding that people read
regularly spelled words faster than irregularly spelled words shows that:

1. People either (A) look for the string of letters in their lexicon and, if they find it,
pronounce the stored representation for it, or (B) if they don’t recognize it, sound it
out based on linguistic knowledge.

2. The mind notices irregularity.
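Point 1 describes a dual-route picture of naming, which can be sketched as a toy Python model. The lexicon entries and the letter-to-sound rules below are illustrative assumptions, not real phonological data:

```python
# Toy dual-route naming sketch.  Stored forms and rules are invented
# for illustration only.
LEXICON = {"yacht": "/jat/", "car": "/kar/"}                # stored pronunciations
RULES = {"c": "k", "a": "a", "r": "r", "t": "t", "s": "s"}  # sounding-out rules

def name_word(spelling):
    """Route A: look the string up in the lexicon and pronounce the
    stored representation.  Route B: if it is not recognized, sound it
    out letter by letter using regular spelling rules."""
    if spelling in LEXICON:
        return LEXICON[spelling], "lexical route"
    sounded_out = "/" + "".join(RULES.get(ch, ch) for ch in spelling) + "/"
    return sounded_out, "rule route"

print(name_word("yacht"))  # ('/jat/', 'lexical route')  irregular: must be stored
print(name_word("cat"))    # ('/kat/', 'rule route')     regular: can be sounded out
```

An irregular word like yacht can only be named via the stored lexical route, whereas a regular string succeeds through the rules alone, which is why irregularity slows naming.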

Syntactic Processing
Listeners need to build phrase structure representations of sentences as they hear them
in order to understand them. They must place each incoming word in a grammatical
category and disambiguate messages. Garden path sentences are ones that require
listeners to shift their analysis midway through the sentence: After the child visited the
doctor prescribed a course of injections. Readers will naturally put the doctor into the
slot of direct object for the verb visited, but as they read on they must change their
analysis and recognize the doctor as the subject of the main clause instead.

The mind uses two principles in parsing sentences that lead people astray when
encountering garden path sentences:

Minimal attachment: build the simplest structure consistent with the grammar of the
language.

Late closure: attach incoming material to the phrase that is currently being processed.

Memory constraints prevent the easy comprehension of a sentence like: Jack built the
house that the malt that the rat that the cat that the dog worried killed ate lay in.
Performance constraints like this limit the number of sentences we are likely to create
out of the infinite possibilities.

Shadowing tasks involve subjects repeating what they hear as rapidly as possible. Most
people can shadow with a delay of 500 to 800 milliseconds, but some people can
shadow within one syllable (300 milliseconds behind). Fast shadowers correct speech
errors even when told not to, and corrections are more likely to occur when the target
word is predictable based on linguistic context. These experiments provide evidence for
top-down processing and show how impressively fast listeners do grammatical analysis.

Speech Production: Planning Units


Although speech sounds are linearly ordered, slips of the tongue (including
spoonerisms) reveal that speech is conceptualized before it is uttered.

Intended: ad hoc

Actual: odd hack


The vowel sounds [æ] in the first word and [a] in the second word were reversed. This
type of error reveals that the second word was already planned. Interestingly,
phonological errors primarily occur in content morphemes rather than function
morphemes, and function morphemes are not interchanged the way content morphemes are.

Word substitutions are seldom random; we tend to accidentally replace a word with a
semantically related word. Sometimes we produce a blend, which is part of one word
and part of another:

splinters/blisters → splisters

edited/annotated → editated

Segments tend to stay in the same position in these blend errors.

Application and Misapplication of Rules


Sometimes speakers also make errors with morphological and syntactic rules. Rules may
be applied to create possible but nonexistent words such as ambigual. Regular rules may
accidentally be applied to irregular words as in swimmed.

In an error such as saying a burly bird instead of an early bird, the appropriate
allomorph (a instead of an) is chosen even though the speaker did not intend to produce
a word starting with a consonant. This tells us that the rule to choose a or an must apply
after early was accidentally switched to burly.
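This rule-ordering argument can be sketched as a toy Python model. One simplification to note: the function below keys on the first letter, whereas the real a/an rule keys on the first sound:

```python
def choose_article(word):
    """The a/an allomorph rule looks at the word that actually follows,
    so it must apply AFTER any word substitution has occurred.
    (Simplification: checks the first letter, not the first sound.)"""
    return "an" if word[0].lower() in "aeiou" else "a"

def produce(intended, slip=None):
    # The speech error (e.g., early -> burly) happens first ...
    word = slip if slip is not None else intended
    # ... and only then is the article allomorph chosen.
    return f"{choose_article(word)} {word} bird"

print(produce("early"))                 # an early bird
print(produce("early", slip="burly"))   # a burly bird
```

Because the article agrees with the erroneous word burly rather than the intended early, the allomorph rule must be ordered after the substitution, just as the text argues.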

Nonlinguistic Influences
Nonlinguistic factors can also contribute to speech production.

Intended utterance: I’ve never heard of classes on Good Friday

Actual utterance: I’ve never heard of classes on April 9th

Good Friday was on April 9th that year, so even though Good Friday and April 9th have
nothing in common phonologically or morphologically, the nonlinguistic association was
enough to prompt such an error.
