
Lecture 2: Phonetics

Phonetics is the study of sounds. To understand the mechanics of human languages, one has to understand the physiology of the human body. Letters represent sounds in a rather intricate way, which has advantages and disadvantages. To represent sounds by letters in an accurate and uniform way, the International Phonetic Alphabet (IPA) was created.

We begin with phonology and phonetics. It is important to understand the difference between
phonetics and phonology. Phonetics is the study of actual sounds of human languages, their production
and their perception. It is relevant to linguistics for the simple reason that the sounds are the primary
physical manifestation of language. Phonology on the other hand is the study of sound systems. The
difference is roughly speaking this. There are countless different sounds we can make, but only some
count as sounds of a language, say English. Moreover, as far as English is concerned, many perceptibly
distinct sounds are not considered ‘different’. The letter /p/, for example, can be pronounced in many
different ways, with more emphasis, with more loudness, with different voice onset time, and so on.
From a phonetic point of view, these are all different sounds; from a phonological point of view there is
only one (English) sound, or phoneme: [p].
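To make this many-to-one relation concrete, here is a minimal sketch in Python (the tiny inventory of phones is just an illustration, not a complete list):

# Several phonetically distinct realizations ("phones") of English /p/,
# all counting as one and the same phoneme from a phonological point of view.
PHONEME_OF = {
    "pʰ": "p",   # aspirated, as at the beginning of "pin"
    "p":  "p",   # unaspirated, as after /s/ in "spin"
    "p̚":  "p",   # unreleased, as often at the end of "cup"
}

def phoneme(phone: str) -> str:
    """Map a narrowly transcribed phone to the phoneme it realizes."""
    return PHONEME_OF[phone]

print(phoneme("pʰ"))  # -> p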

The difference is very important, though often enough it is not evident whether a phenomenon is phonetic in nature or phonological. English, for example, has a basic sound [t]. While from a phonological point of view there is only one phoneme [t], there are infinitely many actual sounds that realize this phoneme. So, while there are infinitely many different sounds for any given language, there are only finitely many phonemes, and the upper limit is around 120. English has 40 (see Table 7). The difference can also be illustrated with music. There is a continuum of pitches, but the piano has only 88 keys, so you can produce only 88 different pitches. The strings of the piano are given, so that the basic sound colour and pitch cannot be altered. But you can still manipulate the loudness, for example. Sheet music reflects this state of affairs in the same way as written language: the musical sounds are described by discrete signs, the keys. Returning now to language: the differences between the various realizations of the letter /t/, for example, are negligible in English, and often enough we cannot even tell them apart. Still, if we recorded the sounds and mapped them out in a spectrogram, we could actually see the difference. (Spectrograms are an important instrument in phonetics because they visualize sounds, so that you can see what you often cannot even hear.)
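Here is a minimal sketch in Python of how such a spectrogram can be computed and displayed, assuming a mono recording in a file called recording.wav and using SciPy and Matplotlib purely as an example toolchain:

import matplotlib.pyplot as plt
from scipy.io import wavfile
from scipy.signal import spectrogram

# Read the recording: sampling rate in Hz and the signal samples.
rate, samples = wavfile.read("recording.wav")

# Compute the spectrogram: frequency bins, time frames, and the energy
# of each frequency band at each moment in time.
freqs, times, energy = spectrogram(samples, fs=rate)

# Time runs along the x-axis, frequency along the y-axis; the colour shows
# how strongly each frequency is present, which is what makes otherwise
# inaudible differences between sounds visible.
plt.pcolormesh(times, freqs, energy, shading="gouraud")
plt.xlabel("Time [s]")
plt.ylabel("Frequency [Hz]")
plt.show()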
Other languages cut the sound continuum in a different way. Not all realizations of /t/ in English sound good in French, for example; basically, French speakers pronounce /t/ without aspiration. This means that if we think of the sounds as forming a ‘space’, the so-called basic sounds of a language occupy some region of that space. These regions vary from one language to another.

Languages are written in alphabets, and many use the Latin alphabet. It turns out that not only is the Latin alphabet not always suitable for other languages, orthographies are often not a reliable source for pronunciation. English is a case in point. To illustrate the problems, let us look at the following tables (taken from [Coulmas, 2003]). Table 1 concerns the values of the letter /x/ in different languages. As one can see, the correspondence between letters and sounds is not at all uniform. On the other hand, even in one and the same language the correspondence can be non-uniform. Table 2 lists ways to represent [ə] in English by letters; basically, any of the vowel letters can represent [ə]. This mismatch has various reasons, a particular one being language change and dialectal difference. The sounds of a language change slowly over time. If we could hear a tape recording of English spoken, say, one or two hundred years ago in one and the same region, we would surely notice a difference. The orthography, however, tends to be conservative. The good side of a stable writing system is that we can (in principle) read older texts even if we do not know how to pronounce them. Second, languages with strong dialectal variation often fix writing according to one of the dialects. Once again this means that documents are understood across dialects, even though they are read out differently.
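As a small illustration of the point about [ə], here are standard example words (these are common textbook cases, not the contents of Table 2) in which each of the five vowel letters spells schwa:

# Each of the five vowel letters of English can spell the same sound, [ə].
SCHWA_SPELLINGS = {
    "a": "about",   # the first vowel is [ə]
    "e": "taken",   # the second vowel is [ə]
    "i": "pencil",  # the second vowel is [ə]
    "o": "lemon",   # the second vowel is [ə]
    "u": "circus",  # the second vowel is [ə]
}

for letter, word in SCHWA_SPELLINGS.items():
    print(f"letter <{letter}> spells [ə] in '{word}'")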

I should point out here that there is no unique pronunciation of any letter in a language. More
often than not it has quite distinct values. For example, the letter /p/ sounds quite different in /photo/ than it does in /plus/. In fact, the sound described by /ph/ is the same as the one normally described by /f/
(for example in /flood/). The situation is that we nevertheless ascribe a ‘normal’ value to a letter (which
we use when pronouncing the letter in isolation or in reciting the alphabet). This connection is learned in
school and is part of the writing system, by which I mean more than just the rendering of words into
sequences of letters. Notice a curious fact here. The letter /b/ is pronounced like /bee/ in English, with a
subsequent vowel that is not part of the value of the letter. In Sanskrit, the primitive consonantal letters
represent the consonant plus [a], while the recitation of the letter is nowadays done without it. For
example, the letter for “b” has value [bə] when used ordinarily, while it is recited [b]. If one does not
want a pronunciation with schwa, the letter is augmented by a stroke.

In the sequel I shall often refer to the pronunciation of a letter; by that I mean the standard value assigned to it in reciting the alphabet, though without the added vowel. This recipe is, I hope,
reasonably clear, though it has shortcomings (the recitation of /w/ reveals little of the actual sound
value).
The disadvantage for the linguist is, first, that the standard orthographies have to be learned (if you study many different languages this can be a big impediment) and, second, that they do not reveal what is nevertheless important: the sound quality. For that reason one has agreed on a special alphabet, the so-called International Phonetic Alphabet (IPA). In principle this alphabet is designed to give an accurate
written transcription of sounds, one that is uniform for all languages. Since the IPA is an international
standard, it is vital that one understands how it works (and can read or write using it). The complete set
of symbols is rather complex, but luckily one does not have to know all of it.

The Analysis of Speech Sounds

First of all, the continuum of speech is broken up into a sequence of discrete units, which we refer to as sounds. Thus we are analyzing language utterances as sequences of sounds. Right away we mention an exception: intonation and stress. The sentences below are distinct only in intonation (falling pitch versus falling and rising pitch).

(14) You spoke with the manager.

(15) You spoke with the manager?

Also, the word /protest/ has two different pronunciations; when it is a noun the stress is on the
first syllable, when it is a verb it is on the second. Stress and intonation obviously affect the way in which
the sounds are produced (changing loudness and/or pitch), but in terms of the decomposition of an utterance into segments, intonation and stress have to be kept apart. We shall return to stress later. Suffice it to say that in the IPA stress is marked not on the vowel but on the syllable (by a [ˈ] before the stressed syllable), since it is thought to be a property of the syllable. Tone is considered to be a suprasegmental feature, too. It does not play a role in European languages, but it does, for example, in languages of South East Asia (including Chinese and Vietnamese), in languages of Africa, and in Native American languages. We shall not deal with tone.
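As a small sketch of this stress notation, the noun and verb readings of /protest/ differ only in where the [ˈ] mark is placed (the transcriptions below are broad and somewhat simplified):

# The stress mark [ˈ] precedes the stressed syllable rather than sitting
# on the vowel itself.
PROTEST = {
    "noun": "ˈproʊ.tɛst",  # stress on the first syllable
    "verb": "proʊˈtɛst",   # stress on the second syllable
}

for part_of_speech, transcription in PROTEST.items():
    print(part_of_speech, transcription)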

Sounds are produced in the vocal tract. Air flows through the mouth and nose, and the characteristics of the sounds are manipulated by several so-called articulators. A rough picture is that
the mouth aperture is changed by moving the jaw, and that the shape of the cavity can be manipulated
by the tongue in many ways. The parts of the body that are involved in shaping the sound, the
articulators, can be active (in which case they move) or passive. The articulators are as follows: oral cavity, upper lip, lower lip, upper teeth, alveolar ridge (the section of the mouth just behind the upper teeth stretching to the ‘corner’), tongue tip, tongue blade (the flexible part of the tongue), tongue body, tongue root, epiglottis (the leaf-like appendage to the tongue in the pharynx), pharynx (the back vertical space of the vocal tract, between uvula and larynx), hard palate (upper part of the mouth just above the tongue body in normal position), soft palate or velum (the soft part of the mouth above the tongue, just behind the hard palate), uvula (the hanging part of the soft palate), and larynx (the part housing the vocal cords). For most articulators it is clear whether they can be active or passive, so this should not need further comment. It is evident that the vocal cords play a major role in sounds (they are responsible for the distinction between voiced and unvoiced), and the sides of the tongue are also used (in sounds known as laterals). Table 3 gives some definitions of phonetic features in terms of articulators for consonants.

Table 3: IPA consonant column labels and the articulators involved

bilabial: the two lips, both active and passive
labiodental: active lower lip to passive upper teeth
dental: active tongue tip/blade to passive upper teeth
alveolar: active tongue tip/blade to passive front part of the alveolar ridge
postalveolar: active tongue blade to passive area behind the alveolar ridge
retroflex: active tongue tip raised or curled to passive postalveolar area (difference between postalveolar and retroflex: blade vs. tip)
palatal: tongue blade/body to hard palate behind the entire alveolar ridge
velar: active body of tongue to passive soft palate (sometimes to back of soft palate)
uvular: active body of tongue to passive (or active) uvula
pharyngeal: active body/root of tongue to passive pharynx
glottal: both vocal cords, both active and passive

Column labels here refer to what defines the place of articulation as opposed to the manner
of articulation. The degree of constriction is roughly the distance of the active articulator to the passive
articulator. The degree of constriction plays less of a role in consonants, though it does vary, say,
between full contact [d] and ‘close encounter’ [z], and
it certainly varies during the articulation (for example in affricates [dz] where the tongue retreats in a
slower fashion than with [d]). The manner of articulation combines the degree of constriction together
with the way it changes in time. Table 4 gives an overview of the main terms used in the IPA and Table 5
identifies the row labels of the IPA chart. Vowels differ from consonants in that there is no constriction
of air flow. The notions of active and passive articulator apply. Here we find at least four degrees of
constriction (close, close-mid, open-mid and open), corresponding to the height of the tongue body
(plus degree of mouth aperture). There is a second dimension for the horizontal position of the tongue
body. The combination of these two parameters is often given in the form of a two-dimensional trapezoid, which shows the position of the tongue with more accuracy. There is a third dimension, which defines the rounding (rounded versus unrounded, the latter usually not marked). We add a fourth dimension, nasal versus non-nasal, depending on whether the air flows partly through the nose or only
through the mouth.
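A small sketch of the constriction idea for the alveolar examples just mentioned (the wording of the labels is informal):

# Informal labels for the degree and time course of constriction at the
# alveolar place of articulation, following the discussion above.
ALVEOLAR_CONSTRICTION = {
    "d":  "full contact, released quickly (stop)",
    "z":  "close approximation, air forced through the gap (fricative)",
    "dz": "full contact, released slowly into a fricative (affricate)",
}

for symbol, description in ALVEOLAR_CONSTRICTION.items():
    print(f"[{symbol}]: {description}")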

Naming the Sounds


The way to name a sound is by stringing together its attributes. However, there is a distinction
between naming vowels and consonants. First we describe the names of consonants. For example, [p] is
described as a voiceless bilabial stop, and [m] is called a (voiced) bilabial nasal. The rules are as follows:

(16) voicing place manner

Sometimes other features are added. If we want to describe [pʰ] we say that it is a voiceless
bilabial aspirated stop. The additional specification ‘aspirated’ is a manner attribute, so it is put after the
place description (but before the attribute ‘stop’, since the latter is a noun). For example, the sequence
‘voiced retroflex fricative’ refers to [ʐ], as can be seen from the IPA chart.
Vowels on the other hand are always described as ‘vowels’, and all the other features are
attributes. We have for example the description of [y] as ‘high front rounded vowel’. This shows that the
sequence is

(17) height place lip-attitude [nasality] vowel

Nasality is optional. If nothing is said, the vowel is not nasal.
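The two patterns (16) and (17) can be sketched as small functions (the attribute vocabulary is taken from the examples above; this is only an illustration, not a general naming tool):

# Consonants: voicing, then place, then any extra manner attributes, then the
# manner noun. Vowels: height, place, lip attitude, optional nasality, "vowel".
def consonant_name(voicing, place, manner, extras=()):
    return " ".join([voicing, place, *extras, manner])

def vowel_name(height, place, lip_attitude, nasal=False):
    parts = [height, place, lip_attitude]
    if nasal:
        parts.append("nasal")
    parts.append("vowel")
    return " ".join(parts)

print(consonant_name("voiceless", "bilabial", "stop"))                  # [p]
print(consonant_name("voiced", "bilabial", "nasal"))                    # [m]
print(consonant_name("voiceless", "bilabial", "stop", ("aspirated",)))  # [pʰ]
print(vowel_name("high", "front", "rounded"))                           # [y]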

On Strict Transcription

Since IPA tries to symbolize a sound with precision, there is a tension between accuracy and
usefulness. As we shall see later, the way a phoneme is realized changes from environment to
environment. Some of these changes are so small that one needs a trained ear to even hear them. The
question is whether we want the difference to show up in the notation. At first glance the answer seems
to be negative. But two problems arise: (a) linguists sometimes do want to represent the difference and
there should be a way to do that, and (b) a contrast that speakers of one language do not even hear
might turn out to be distinctive and relevant in another. (An example is the difference between English
[d] (alveolar) and a sound where the tongue is put between the teeth (dental). Some languages in India
distinguish these sounds, though I hardly hear a difference.) Thus, on the one hand we need an alphabet that is highly flexible; on the other, we do not want to use it always in full glory. This motivates using
various systems of notation, which differ mainly in accuracy. Table 7 gives you a list of English speech sounds and a phonetic symbol that is exact insofar as knowing the IPA would tell an English speaker exactly what sound is meant by what symbol. (I draw attention, however, to the sound [a], which according to the IPA is not used in American English; instead, we find [ɑ].) This is called broad
transcription. The danger of broad transcription is that a symbol like [p] does not reveal exact details of which sounds fall under it; it merely tells us that we have a voiceless bilabial stop. Since a French broad transcription might use the same symbol [p], we might be tempted to conclude that the English and French sounds are the same. But they are not.
Thus in addition to broad transcription there exists strict or narrow transcription, which consists
in adding more information (say, whether [p] is pronounced with aspiration or not). Clearly, the
precision of the IPA is limited. Moreover, the more primitive symbols it has, the harder it is to memorize. Therefore, the IPA is based on a set of a hundred or so primitive symbols, and a number of diacritics by
which the characteristics of the sound can be narrowed down.
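As a small sketch of how a diacritic narrows a broad symbol down (only the aspiration mark is shown; this is not a general IPA tool):

# Narrow transcription = broad base symbol + diacritics. Here the aspiration
# diacritic (U+02B0, a superscript h) is attached to a broad symbol.
ASPIRATION = "\u02b0"  # ʰ

def aspirated(broad_symbol: str) -> str:
    """Return the narrowly transcribed, aspirated variant of a broad symbol."""
    return broad_symbol + ASPIRATION

print(aspirated("p"))  # pʰ, as at the beginning of English "pin"
print(aspirated("t"))  # tʰ, as at the beginning of English "tin"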

Notes on this section. The book [Rodgers, 2000] gives a fair and illuminating introduction to
phonetics. It is useful to have a look at the active sound chart at:
http://hctv.humnet.ucla.edu/departments/linguistics/VowelsandConsonants/course/chapter1/chapter.html

You can go there and click on symbols to hear what the corresponding sound is. A very useful source is
also the Wikipedia entry.
http://en.wikipedia.org/wiki/International_Phonetic_Alphabet
Table 7: The Sounds of English
