Thursday, 30 September 2010

Psycholinguistics

Language processing
For the processing of language by computers, see Natural language processing.
Language processing refers to the way human beings process speech or writing and understand it as language. Most recent theories hold that this processing is carried out entirely within the brain.
Spoken language
Acoustic stimuli are received by the auditory organ and converted into bioelectric signals in the organ of Corti. These electric impulses are then carried through Scarpa's ganglion (the vestibulocochlear nerve) to the primary auditory cortex in both hemispheres. The two hemispheres, however, treat the signal differently: the left side recognizes distinctive segments such as phonemes, while the right side handles prosodic characteristics and melodic information.
The signal is then transported to Wernicke's area in the left hemisphere (information processed in the right hemisphere is able to cross through inter-hemispheric axons), where the analysis noted above takes place.
From this area, the signal is taken to Broca's area through what is called the arcuate fasciculus. Broca's area is in charge of interpreting the information provided by Wernicke's area (using the pars triangularis) and transmitting information to the closely located motor-related areas of the brain for production of speech (relying on the pars opercularis).
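The classical pathway described above can be summarized as an ordered sequence of stages. The Python sketch below is only a restatement of the text as a simple data structure, not a computational model of the brain:

```python
# A toy summary of the classical speech-comprehension pathway described above.
# Stage names and roles are taken from the text; this is illustrative only.
CLASSICAL_PATHWAY = [
    ("organ of Corti", "converts acoustic stimuli into bioelectric signals"),
    ("vestibulocochlear nerve (Scarpa's ganglion)", "carries impulses to the cortex"),
    ("primary auditory cortex", "left: phonemes; right: prosody and melody"),
    ("Wernicke's area", "analyzes the signal for comprehension"),
    ("arcuate fasciculus", "fiber tract linking temporal and frontal regions"),
    ("Broca's area", "pars triangularis interprets; pars opercularis drives motor areas"),
]

for stage, role in CLASSICAL_PATHWAY:
    print(f"{stage:<45} {role}")
```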
Written language
Written language may work in a fairly similar way, using the primary visual cortex as the input pathway instead of the auditory cortex. Even granting the separate input pathways, it is still undetermined whether the two modes of processing use the same neurological resources through a common gateway or whether there are dedicated cortical regions for each function.

Scarpa's ganglion
The vestibular nerve ganglion (also called Scarpa's ganglion) is the ganglion of the vestibular nerve. It contains the cell bodies of the bipolar primary afferent neurons whose peripheral processes form synaptic contact with hair cells of the vestibular sensory end organs.
It is named for Antonio Scarpa.[1][2]
At birth, it is already close to its final size.[3]
Primary auditory cortex
The primary auditory cortex is the region of the brain responsible for processing auditory (sound) information. It is located in the temporal lobe and performs the basics of hearing: pitch and volume.
Function
As with other primary sensory cortical areas, auditory sensations reach perception only if received and processed by a cortical area. Evidence for this comes from lesion studies in human patients who have sustained damage to cortical areas through tumors or strokes, or from animal experiments in which cortical areas were deactivated by cooling or locally applied drug treatment. Damage to the primary auditory cortex in humans leads to a loss of any 'awareness' of sound, but the ability to react reflexively to sounds remains, as there is a great deal of subcortical processing in the auditory brainstem and midbrain.
Neurons in the auditory cortex are organized according to the frequency of sound to which they respond best. Neurons at one end of the auditory cortex respond best to low frequencies; neurons at the other respond best to high frequencies. There are multiple auditory areas (much like the multiple areas in the visual cortex), which can be distinguished anatomically and on the basis that they contain a complete "frequency map." The purpose of this frequency map (known as a tonotopic map) is unknown, and is likely to reflect the fact that the cochlea is arranged according to sound frequency. The auditory cortex is involved in tasks such as identifying and segregating auditory "objects" and identifying the location of a sound in space.
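As an illustrative sketch (not drawn from the article or from any measured cortical map), the tonotopic principle, cells ordered by best frequency along the cortical surface in a roughly logarithmic layout mirroring the cochlea, can be mimicked with a toy frequency-to-position mapping:

```python
import math

def tonotopic_position(freq_hz, f_min=20.0, f_max=20000.0):
    """Toy model: map a frequency to a normalized position (0..1) along a
    tonotopic axis, assuming a logarithmic layout like the cochlea's.
    The range and the log assumption are illustrative, not measured values."""
    freq_hz = min(max(freq_hz, f_min), f_max)
    return math.log(freq_hz / f_min) / math.log(f_max / f_min)

# Low frequencies map near one end of the axis, high frequencies near the other.
for f in (100, 440, 1000, 4000, 16000):
    print(f"{f:>6} Hz -> position {tonotopic_position(f):.2f}")
```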
Human brain scans have indicated that a peripheral part of this brain region is active when people try to identify musical pitch. Individual cells consistently respond to sounds at specific frequencies, or at multiples of that frequency.
The auditory cortex is an important yet ambiguous part of the hearing process. When the sound pulses pass into the cortex the specifics of what exactly takes place are unclear. Distinguished scientist and musician James Beament puts it into perspective when he writes, “The cortex is so complex that the most we may ever hope for is to understand it in principle, since the evidence we already have suggests that no two cortices work in precisely the same way."[1]
In hearing, multiple sounds are absorbed simultaneously. The role of the auditory system is to decide which components of the sound belong together. Many have surmised that this linking is based on the location of sounds; however, sound is distorted in numerous ways when reflected off different surfaces, which makes this view unlikely. Instead, the auditory cortex forms groupings based on other, more reliable fundamentals; in music, for example, these would include harmony, timing, and pitch.[2]
The primary auditory cortex corresponds approximately to Brodmann areas 41 and 42. It lies in the posterior half of the superior temporal gyrus and also dives into the lateral sulcus as the transverse temporal gyri (also called Heschl's gyri).
The primary auditory cortex is located in the temporal lobe. There are additional areas of the human cerebral cortex that are involved in processing sound, in the frontal and parietal lobes. Animal studies indicate that auditory fields of the cerebral cortex receive ascending input from the auditory thalamus, and that they are interconnected on the same and on the opposite cerebral hemispheres. The auditory cortex is composed of fields, which differ from each other in both structure and function.[3]
The number of fields varies in different species, from as few as 2 in rodents to as many as 15 in the rhesus monkey. The number, location, and organization of fields in the human auditory cortex are not known at this time. What is known about the human auditory cortex comes from a base of knowledge gained from studies in mammals, including primates, used to interpret electrophysiologic tests and functional imaging studies of the brain in humans.
When each instrument of the symphony orchestra or the jazz band plays the same note, the quality of each sound is different — but the musician perceives each note as having the same pitch. The neurons of the auditory cortex of the brain are able to respond to pitch. Studies in the marmoset monkey have shown that pitch-selective neurons are located in a cortical region near the anterolateral border of the primary auditory cortex. This location of a pitch-selective area has also been identified in recent functional imaging studies in humans.[4][5]
The auditory cortex does not just receive input from lower centers and the ear; it also sends signals back to them.
Brodmann area 41
This area is also known as anterior transverse temporal area 41 (H). It is a subdivision of the cytoarchitecturally-defined temporal region of cerebral cortex, occupying the anterior transverse temporal gyrus (H) in the bank of the lateral sulcus on the dorsal surface of the temporal lobe. Brodmann area 41 is bounded medially by the parainsular area 52 (H) and laterally by the posterior transverse temporal area 42 (H) (Brodmann-1909).
Brodmann area 42
This area is also known as posterior transverse temporal area 42 (H). It is a subdivision of the cytoarchitecturally-defined temporal region of cerebral cortex, located in the bank of the lateral sulcus on the dorsal surface of the temporal lobe. Brodmann area 42 is bounded medially by the anterior transverse temporal area 41 (H) and laterally by the superior temporal area 22 (Brodmann-1909).
Relationship to auditory system


Figure: Areas of localization on the lateral surface of the hemisphere. Motor area in red; area of general sensations in blue; auditory area in green; visual area in yellow.
The auditory cortex is the most highly organized processing unit of sound in the brain. This cortex area is the neural crux of hearing, and, in humans, language and music.
The auditory cortex is divided into three separate parts, the primary, secondary and tertiary auditory cortex. These structures are formed concentrically around one another, with the primary AC in the middle and the tertiary AC on the outside.
The primary auditory cortex is tonotopically organized, which means that certain cells in the auditory cortex are sensitive to specific frequencies. This is a fascinating function which has been preserved throughout most of the audition circuit. This area of the brain is thought to identify the fundamental elements of music, such as pitch and loudness, which makes sense, as this is the area that receives direct input from the medial geniculate nucleus of the thalamus. The secondary auditory cortex has been implicated in the processing of "harmonic, melodic and rhythmic patterns." The tertiary auditory cortex supposedly integrates everything into the overall experience of music.[6]
An evoked-response study of congenitally deaf kittens by Klinke et al. used field potentials to measure cortical plasticity in the auditory cortex. These kittens were stimulated and measured against unstimulated congenitally deaf cats (CDCs) and normal hearing cats. The field potentials measured for the artificially stimulated CDCs eventually became much stronger than those of a normal hearing cat.[7] This is in concordance with Eckart Altenmuller's study, in which it was observed that students who received musical instruction had greater cortical activation than those who did not.[8]
The auditory cortex exhibits some strange behavior pertaining to the gamma wave frequency. When subjects are exposed to three or four cycles of a 40 hertz click, an abnormal spike appears in the EEG data, which is not present for other stimuli. The spike in neuronal activity correlating to this frequency is not restricted to the tonotopic organization of the auditory cortex. It has been theorized that this is a "resonant frequency" of certain areas of the brain, and it appears to affect the visual cortex as well.[9]
Gamma band activation (20 to 40 Hz) has been shown to be present during the perception of sensory events and the process of recognition. Kneif et al., in their 2000 study, presented subjects with eight-note sequences from well-known tunes, such as Yankee Doodle and Frère Jacques. Randomly, the sixth and seventh notes were omitted, and an electroencephalogram and a magnetoencephalogram were each employed to measure the neural results. Specifically, the presence of gamma waves, induced by the auditory task at hand, was measured over the temporal regions of the subjects. The OSP, or omitted stimulus response, was located in a slightly different position: 7 mm more anterior, 13 mm more medial, and 13 mm more superior with respect to the complete sets. The OSP recordings were also characteristically lower in gamma waves compared to the complete musical set. The evoked responses during the sixth and seventh omitted notes are assumed to be imagined, and were characteristically different, especially in the right hemisphere.[10] The right auditory cortex has long been shown to be more sensitive to tonality, while the left auditory cortex has been shown to be more sensitive to minute sequential differences in sound, specifically speech.
Hallucinations have been shown to produce oscillations which parallel (although are not exactly the same as) the gamma frequency range. Sperling showed in his 2004 study that auditory hallucinations produce oscillations in the 12.5–30 Hz band. These oscillations occurred in the left auditory cortex of schizophrenic patients and were compared against 13 controls. This aligns with studies of people remembering a song in their minds: they do not perceive any sound, but experience the melody, rhythm, and overall experience of sound. When schizophrenics experience hallucinations, it is the primary auditory cortex which becomes active. This is characteristically different from remembering a sound stimulus, which only faintly activates the tertiary auditory cortex.[11] By deduction, artificial stimulation of the primary auditory cortex should elicit a remarkably real auditory hallucination. The termination of all audition and music in the tertiary auditory cortex creates a fascinating nexus of aural information. If this theory is true, it would be interesting to study a subject with a damaged tertiary auditory cortex, or one with artificially suppressed function. This would be very difficult to do, as the tertiary cortex is simply a ring around the secondary, which is in turn a ring around the primary AC.
Tone is perceived in more places than just the auditory cortex; one particularly interesting area is the rostromedial prefrontal cortex.[12] Janata et al., in their 2002 study, explored the areas of the brain which were active during tonality processing by means of fMRI. The results showed several areas which are not normally considered to be part of the audition process. The rostromedial prefrontal cortex is a subsection of the medial prefrontal cortex, which projects to the amygdala and is thought to aid in the inhibition of negative emotion.[13] The medial prefrontal cortex is thought to be the core developmental difference between the impulsive teenager and the calm adult. The rostromedial prefrontal cortex is tonality sensitive, meaning it is activated by the tones and frequencies of resonant sounds and music.

Phoneme
In a language or dialect, a phoneme (from the Greek: φώνημα, phōnēma, "a sound uttered") is the smallest segmental unit of sound employed to form meaningful contrasts between utterances.[1]
Thus a phoneme is a group of slightly different sounds which are all perceived to have the same function by speakers of the language or dialect in question. An example of a phoneme is the /k/ sound in the words kit and skill. (In transcription, phonemes are placed between slashes, as here.) Even though most native speakers don't notice this, in most dialects, the k sounds in each of these words are actually pronounced differently: they are different speech sounds, or phones (which, in transcription, are placed in square brackets). In our example, the /k/ in kit is aspirated, [kʰ], while the /k/ in skill is not, [k]. The reason why these different sounds are nonetheless considered to belong to the same phoneme in English is that if an English-speaker used one instead of the other, the meaning of the word would not change: using [kʰ] in skill might sound odd, but the word would still be recognized. By contrast, some other phonemes could be substituted (creating a minimal pair) which would cause a change in meaning: producing words like still (substituting /t/), spill (substituting /p/) and swill (substituting /w/). These other sounds (/t/, /p/ and /w/) are, in English, different phonemes. In some languages, however, [kʰ] and [k] are different phonemes, and are perceived as such by the speakers of those languages. Thus, in Icelandic, /kʰ/ is the first sound of kátur 'cheerful', while /k/ is the first sound of gátur 'riddles'.
In some languages, each letter in the spelling system represents one phoneme. However, in English spelling there is a poor match between spelling and phonemes. For example, the two letters sh represent the single phoneme /ʃ/, while the letters k and c can both represent the phoneme /k/ (as in kit and cat).
Phones that belong to the same phoneme, such as [t] and [tʰ] for English /t/, are called allophones. A common test to determine whether two phones are allophones or separate phonemes relies on finding minimal pairs: words that differ by only the phones in question. For example, the words tip and dip illustrate that [t] and [d] are separate phonemes, /t/ and /d/, in English, whereas the lack of such a contrast in Korean (/tʰata/ is pronounced [tʰada], for example) indicates that in this language they are allophones of a phoneme /t/.
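To make the minimal-pair test concrete, here is a small hypothetical sketch in Python (the word list and broad transcriptions are illustrative, not taken from any cited source) that flags pairs of transcribed words differing in exactly one segment:

```python
def minimal_pairs(lexicon):
    """Return pairs of words whose transcriptions differ in exactly one
    segment; such pairs are evidence that the differing segments contrast.
    `lexicon` maps a spelling to a tuple of phoneme symbols (illustrative)."""
    pairs = []
    items = list(lexicon.items())
    for i in range(len(items)):
        for j in range(i + 1, len(items)):
            (w1, p1), (w2, p2) = items[i], items[j]
            if len(p1) == len(p2) and sum(a != b for a, b in zip(p1, p2)) == 1:
                pairs.append((w1, w2))
    return pairs

# Toy English transcriptions (broad and simplified).
lexicon = {
    "tip": ("t", "ɪ", "p"),
    "dip": ("d", "ɪ", "p"),
    "pin": ("p", "ɪ", "n"),
    "bin": ("b", "ɪ", "n"),
}

print(minimal_pairs(lexicon))   # [('tip', 'dip'), ('pin', 'bin')]
```

Here tip/dip differ only in their first segment, which is what licenses treating /t/ and /d/ as separate phonemes.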
Some linguists (such as Roman Jakobson, Morris Halle, and Noam Chomsky) consider phonemes to be further decomposable into features, such features being the true minimal constituents of language. Features overlap each other in time, as do suprasegmental phonemes in oral language and many phonemes in sign languages. Features could be designated as acoustic (Jakobson) or articulatory (Halle & Chomsky) in nature.
Background and related ideas
The term phonème was reportedly first used by A. Dufriche-Desgenettes in 1873, but it referred only to a speech sound. The term phoneme as an abstraction was developed by the Polish linguist Jan Niecisław Baudouin de Courtenay and his student Mikołaj Kruszewski during 1875–1895. The term used by these two was fonema, the basic unit of what they called psychophonetics. The concept of the phoneme was then elaborated in the works of Nikolai Trubetzkoy and others of the Prague School (during the years 1926–1935), and in those of structuralists like Ferdinand de Saussure, Edward Sapir, and Leonard Bloomfield. Some structuralists wished to eliminate a cognitive or psycholinguistic function for the phoneme.
Later, it was also used in generative linguistics, most famously by Noam Chomsky and Morris Halle, and remains central to many accounts of the development of modern phonology. As a theoretical concept or model, though, it has been supplemented and even replaced by others.
Some languages make use of tone for phonemic distinction. In this case, the tones used are called tonemes. Some languages distinguish words made up of the same phonemes (and tonemes) by using different durations of some elements, which are called chronemes. However, not all scholars working on languages with distinctive duration use this term.
The distinction between phonetic and phonemic systems gave rise to Kenneth Pike's concepts of emic and etic description.
Notation
A transcription that only indicates the different phonemes of a language is said to be phonemic. In languages that are morphophonemic (vowels in particular)[clarification needed], pronunciations that correspond to the canonical alphabet pronunciations are called alphaphonemic. Such transcriptions are enclosed within virgules (slashes), / /; these show that each enclosed symbol is claimed to be phonemically meaningful. On the other hand, a transcription that indicates finer detail, including allophonic variation like the two English L's, is said to be phonetic, and is enclosed in square brackets, [ ].
The common notation used in linguistics employs virgules (slashes) (/ /) around the symbol that stands for the phoneme. For example, the phoneme for the initial consonant in the word "phoneme" would be written as /f/. In other words, the graphemes are <ph>, but this digraph represents one sound, /f/. Allophones, more phonetically specific descriptions of how a given phoneme might be commonly instantiated, are often denoted in linguistics by the use of diacritical or other marks added to the phoneme symbols and then placed in square brackets ([ ]) to differentiate them from the phoneme in slant brackets (/ /). The conventions of orthography are then kept separate from both phonemes and allophones by the use of angle brackets < > to enclose the spelling.
The symbols of the International Phonetic Alphabet (IPA) and extended sets adapted to a particular language are often used by linguists to write phonemes of oral languages, with the principle being one symbol equals one categorical sound. Due to problems displaying some symbols in the early days of the Internet, systems such as X-SAMPA and Kirshenbaum were developed to represent IPA symbols in plain text. As of 2004, any modern web browser can display IPA symbols (as long as the operating system provides the appropriate fonts), and we use this system in this article.
Usually, long vowels and consonants are represented either by a length indicator or doubling of the symbol in question.
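For example, a few common X-SAMPA correspondences can be expressed as a simple plain-text lookup. The subset below is a small illustrative sample chosen by the editor; the full scheme covers the entire IPA:

```python
# A handful of X-SAMPA-to-IPA correspondences (illustrative subset only).
XSAMPA_TO_IPA = {
    "S": "ʃ",    # voiceless postalveolar fricative
    "tS": "tʃ",  # voiceless postalveolar affricate
    "N": "ŋ",    # velar nasal
    "@": "ə",    # schwa
    "i:": "iː",  # long vowel; ":" is the length indicator
}

def to_ipa(xsampa_tokens):
    """Convert a list of X-SAMPA tokens to IPA, leaving unknown tokens as-is."""
    return "".join(XSAMPA_TO_IPA.get(tok, tok) for tok in xsampa_tokens)

print(to_ipa(["S", "i:", "p"]))   # 'ʃiːp' (the word "sheep")
```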
Examples
Examples of phonemes in the English language include consonant plosives like /p/ and /b/. These two are most often written consistently with one letter for each sound. Other phonemes, however, are not so apparent in written English; for example, they may be represented by a group of more than one letter, called a digraph, like <sh> (pronounced /ʃ/) or <ch> (pronounced /tʃ/).
For a list of the phonemes in the English language, see IPA for English.
Two sounds which are allophones (sound variants belonging to the same phoneme) in one language may belong to separate phonemes in another language or dialect. In English, for example, /p/ has aspirated and non-aspirated allophones: aspirated as in /pɪn/ and non-aspirated as in /spɪn/. However, in many languages (e.g. Chinese), aspirated /pʰ/ is a phoneme distinct from unaspirated /p/. As another example, there is no distinction between [r] and [l] in Japanese: there is only one /r/ phoneme, though it has various allophones that can sound more like [l], [ɾ], or [r] to English speakers. The sounds [z] and [s] are distinct phonemes in English, but allophones in some varieties of Spanish, such as Andalusian ('andaluz'). The sounds [n] (as in run) and [ŋ] (as in rung) are distinct phonemes in English, but allophones in Italian and Spanish.
One notable type of phoneme is the chroneme, a phonemically relevant extension of the duration of a consonant or vowel. Some languages or dialects, such as Finnish or Japanese, allow chronemes after both consonants and vowels. Others, like Australian English, use them after only one (in the case of Australian English, vowels).
Restricted phonemes
A restricted phoneme is a phoneme that can only occur in a certain environment: There are restrictions as to where it can occur. English has several restricted phonemes:
• /ŋ/, as in sing, occurs only at the end of a syllable, never at the beginning (in many other languages, such as Swahili or Thai, /ŋ/ can appear word-initially).
• /h/ occurs only before vowels and at the beginning of a syllable, never at the end (a few languages, such as Arabic or Romanian, allow /h/ syllable-finally).
• In many American dialects with the cot-caught merger, /ɔ/ occurs only before /r/, /l/, and in the diphthong /ɔɪ/.
• In non-rhotic dialects, /r/ can only occur before a vowel, never at the end of a word or before a consonant.
• Under most interpretations, /w/ and /j/ occur only before a vowel, never at the end of a syllable. However, many phonologists interpret a word like boy as either /bɔɪ/ or /bɔj/.
Biuniqueness
Biuniqueness is a property of the phoneme in classic structuralist phonemics. The biuniqueness definition states that every phonetic allophone must unambiguously be assigned to one and only one phoneme. In other words, there is a many-to-one allophone-to-phoneme mapping instead of a many-to-many mapping.
The notion of biuniqueness was controversial among some pre-generative linguists and was prominently challenged by Morris Halle and Noam Chomsky in the late 1950s and early 1960s.
The unworkable aspects of the concept become apparent when one considers the phenomenon of flapping in North American English. In the right environment, flapping can change either /t/ or /d/ into the allophone [ɾ] for many affected speakers. Here, one allophone is clearly assigned to two phonemes.
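A minimal sketch of the biuniqueness check, using hypothetical observations: each attested allophone is tested for a unique phoneme assignment, and the North American English flap [ɾ], which can realize either /t/ or /d/, fails the test.

```python
# Hypothetical allophone-to-phoneme observations (illustrative, not exhaustive).
observations = [
    ("kʰ", "/k/"),   # aspirated [kʰ] realizing /k/ (as in "kit")
    ("k",  "/k/"),   # plain [k] also realizing /k/ (as in "skill"): many-to-one is fine
    ("ɾ",  "/t/"),   # flap realizing /t/ (as in "betting")
    ("ɾ",  "/d/"),   # flap realizing /d/ (as in "bedding"): violates biuniqueness
]

def biuniqueness_violations(observations):
    """Biuniqueness requires every allophone to map to exactly one phoneme;
    return (allophone, phoneme_1, phoneme_2) triples where that fails."""
    mapping = {}
    violations = []
    for phone, phoneme in observations:
        previous = mapping.setdefault(phone, phoneme)
        if previous != phoneme:
            violations.append((phone, previous, phoneme))
    return violations

print(biuniqueness_violations(observations))   # [('ɾ', '/t/', '/d/')]
```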
Neutralization, archiphoneme, underspecification
Main article: Underspecification
Phonemes that are contrastive in certain environments may not be contrastive in all environments. In the environments where they don't contrast, the contrast is said to be neutralized.
In English there are three nasal phonemes, /m, n, ŋ/, as shown by the minimal triplet,
/sʌm/ sum
/sʌn/ sun
/sʌŋ/ sung
With rare exceptions, these phonemes are not contrastive before plosives such as /p, t, k/ within the same morpheme. Although all three phones appear before plosives, for example in limp, lint, link, only one of these may appear before each of the plosives. That is, the /m, n, ŋ/ distinction is neutralized before each of the plosives /p, t, k/:
• Only /m/ occurs before /p/,
• only /n/ before /t/, and
• only /ŋ/ before /k/.
Thus these phonemes are not contrastive in these environments, and according to some theorists, there is no evidence as to what the underlying representation might be. If we hypothesize that we are dealing with only a single underlying nasal, there is no reason to pick one of the three phonemes /m, n, ŋ/ over the other two.
(In some languages there is only one phonemic nasal anywhere, and due to obligatory assimilation, it surfaces as [m, n, ŋ] in just these environments, so this idea is not as far-fetched as it might seem at first glance.)
In certain schools of phonology, such a neutralized distinction is known as an archiphoneme (Nikolai Trubetzkoy of the Prague school is often associated with this analysis). Archiphonemes are often notated with a capital letter. Following this convention, the neutralization of /m, n, ŋ/ before /p, t, k/ could be notated as |N|, and limp, lint, link would be represented as |lɪNp, lɪNt, lɪNk|. (The |pipes| indicate underlying representation.) Other ways this archiphoneme could be notated are |m-n-ŋ|, {m, n, ŋ}, or |n*|.
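The place-assimilation pattern behind |N| can be stated as a tiny rule. The sketch below is an illustrative implementation assumption, not a claim about any particular phonological formalism: it resolves the archiphoneme from the place of articulation of the following plosive.

```python
# Place of articulation of the English voiceless plosives (standard values).
PLACE = {"p": "labial", "t": "alveolar", "k": "velar"}
NASAL_FOR_PLACE = {"labial": "m", "alveolar": "n", "velar": "ŋ"}

def realize(underlying):
    """Replace the archiphoneme 'N' with the nasal sharing the place of
    articulation of the following plosive, e.g. |lɪNp| -> [lɪmp]."""
    segments = list(underlying)
    for i, seg in enumerate(segments):
        if seg == "N" and i + 1 < len(segments) and segments[i + 1] in PLACE:
            segments[i] = NASAL_FOR_PLACE[PLACE[segments[i + 1]]]
    return "".join(segments)

for word in ("lɪNp", "lɪNt", "lɪNk"):
    print(f"|{word}| -> [{realize(word)}]")   # [lɪmp], [lɪnt], [lɪŋk]
```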
Another example from American English is the neutralization of the plosives /t, d/ following a stressed syllable. Phonetically, both are realized in this position as [ɾ], a voiced alveolar flap. This can be heard by comparing betting with bedding.
[bɛt] bet
[bɛd] bed
with the suffix -ing:
[ˈbɛɾɪŋ] betting
[ˈbɛɾɪŋ] bedding
Thus, one cannot say whether the underlying representation of the intervocalic consonant in either word is /t/ or /d/ without looking at the unsuffixed form. This neutralization can be represented as an archiphoneme |D|, in which case the underlying representation of betting or bedding could be |ˈbɛDɪŋ|.
Another way to talk about archiphonemes involves the concept of underspecification: phonemes can be considered fully specified segments while archiphonemes are underspecified segments. In Tuvan, phonemic vowels are specified with the articulatory features of tongue height, backness, and lip rounding. The archiphoneme |U| is an underspecified high vowel where only the tongue height is specified.
phoneme/archiphoneme   height   backness        roundedness
/i/                    high     front           unrounded
/ɯ/                    high     back            unrounded
/u/                    high     back            rounded
|U|                    high     (unspecified)   (unspecified)
Whether |U| is pronounced as front or back, and whether rounded or unrounded, depends on vowel harmony. If |U| occurs following a front unrounded vowel, it will be pronounced as the phoneme /i/; if following a back unrounded vowel, as /ɯ/; and if following a back rounded vowel, as /u/. This can be seen in the following words:
-|Um| 'my' (the vowel of this suffix is underspecified)
|idikUm| → [idikim] 'my boot' (/i/ is front & unrounded)
|xarUm| → [xarɯm] 'my snow' (/a/ is back & unrounded)
|nomUm| → [nomum] 'my book' (/o/ is back & rounded)
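A minimal sketch of this vowel-harmony resolution, using only the vowels and feature values given above (the simple character-by-character segmentation is a simplification):

```python
# Backness and roundedness of the Tuvan vowels appearing in the examples above.
FEATURES = {
    "i": ("front", "unrounded"),
    "ɯ": ("back", "unrounded"),
    "u": ("back", "rounded"),
    "a": ("back", "unrounded"),
    "o": ("back", "rounded"),
}
HIGH_VOWEL_FOR = {
    ("front", "unrounded"): "i",
    ("back", "unrounded"): "ɯ",
    ("back", "rounded"): "u",
}

def resolve_U(underlying):
    """Fill in the underspecified high vowel |U| by copying backness and
    roundedness from the nearest preceding full vowel (vowel harmony)."""
    out, last_features = [], None
    for seg in underlying:
        if seg == "U":
            # Assumes a preceding full vowel exists, as in the suffix examples.
            out.append(HIGH_VOWEL_FOR[last_features])
        else:
            out.append(seg)
            if seg in FEATURES:
                last_features = FEATURES[seg]
    return "".join(out)

for word in ("idikUm", "xarUm", "nomUm"):
    print(f"|{word}| -> [{resolve_U(word)}]")   # [idikim], [xarɯm], [nomum]
```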
Minimal contrastive units in sign languages
In sign languages, the basic elements of gesture and location were formerly called cheremes (or cheiremes), but general usage changed to phoneme. Tonic phonemes are sometimes called tonemes, and timing phonemes chronemes.
In sign languages, phonemes may be classified as Tab (elements of location, from Latin tabula), Dez (the hand shape, from designator), Sig (the motion, from signation), and with some researchers, Ori (orientation). Facial expressions and mouthing are also phonemic.
There is one published set of phonemic symbols for sign language, the Stokoe notation, used for linguistic research and originally developed for American Sign Language. Stokoe notation has since been applied to British Sign Language by Kyle and Woll, and to Australian Aboriginal sign languages by Adam Kendon. Other sign notations, such as the Hamburg Notation System and SignWriting, are phonetic scripts capable of writing any sign language. However, because they are not constrained by phonology, they do not yield a specific spelling for a sign. The SignWriting form, for example, will be different depending on whether the signer is left- or right-handed, despite the fact that this makes no difference to the meaning of the sign.
Phonological extremes
Languages vary considerably in the number of sounds they treat as distinctive phonemes. Ubykh and Arrernte have only two phonemic vowels, while at the other extreme, the Bantu language Ngwe has 14 vowel qualities, 12 of which may occur long or short, making 26 oral vowels, plus 6 nasalized vowels, long and short, for a total of 38 vowels; !Xóõ achieves 31 pure vowels, not counting its additional variation by vowel length, by varying the phonation. Rotokas has only six consonants, while !Xóõ has somewhere in the neighborhood of 77, and Ubykh 81. French has no phonemic tone or stress, while several of the Kam-Sui languages have nine tones, and one of the Kru languages, Wobe, has been claimed to have 14, though this is disputed. The total phonemic inventory in languages varies from as few as eleven in Rotokas to as many as 112 in !Xóõ (including four tones). The English language uses a rather large set of 13 to 21 vowels, including diphthongs, though its 22 to 26 consonants are close to average. (There are 21 consonant and five vowel letters in the English alphabet, but this does not correspond to the number of consonant and vowel sounds.)
The most common vowel system consists of the five vowels /i/, /e/, /a/, /o/, /u/. The most common consonants are /p/, /t/, /k/, /m/, /n/. Very few languages lack any of these: Arabic lacks /p/, standard Hawaiian lacks /t/, Mohawk and Tlingit lack /p/ and /m/, Hupa lacks both /p/ and a simple /k/, colloquial Samoan lacks /t/ and /n/, while Rotokas and Quileute lack /m/ and /n/.
Prosody (linguistics)
In linguistics, prosody (pronounced /ˈprɒsədi/, PROSS-ə-dee) is the rhythm, stress, and intonation of speech. Prosody may reflect various features of the speaker or the utterance: the emotional state of a speaker; whether an utterance is a statement, a question, or a command; whether the speaker is being ironic or sarcastic; emphasis, contrast, and focus; or other elements of language that may not be encoded by grammar or choice of vocabulary.
Acoustic attributes of prosody
In terms of acoustics, the prosodics of oral languages involve variation in syllable length, loudness, pitch, and the formant frequencies of speech sounds. In sign languages, prosody involves the rhythm, length, and tension of gestures, along with mouthing and facial expressions. Prosody is typically absent in writing, which can occasionally result in reader misunderstanding. Orthographic conventions to mark or substitute for prosody include punctuation (commas, exclamation marks, question marks, scare quotes, and ellipses), and typographic styling for emphasis (italic, bold, and underlined text).
The details of a language's prosody depend upon its phonology. For instance, in a language with phonemic vowel length, this must be marked separately from prosodic syllable length. In a similar manner, prosodic pitch must not obscure tone in a tone language if the result is to be intelligible. Although tone languages such as Mandarin have prosodic pitch variations in the course of a sentence, such variations are long and smooth contours, on which the short and sharp lexical tones are superimposed. If pitch can be compared to ocean waves, the swells are the prosody, and the wind-blown ripples on their surface are the lexical tones, as with stress in English. The word dessert has greater stress on the second syllable, compared to the noun desert, which has greater stress on the first; but this distinction is not obscured when the entire word is stressed by a child demanding "Give me dessert!" Vowels in many languages are likewise pronounced differently (typically less centrally) in a careful rhythm or when a word is emphasized, but not so much as to overlap with the formant structure of a different vowel. Both lexical and prosodic information are thus encoded in the same acoustic dimensions: rhythm, loudness, pitch, and the formant frequencies of speech sounds.
The prosodic domain
Prosodic features are suprasegmental. They are not confined to any one segment, but occur in some higher level of an utterance. These prosodic units are the actual phonetic "spurts", or chunks of speech. They need not correspond to grammatical units such as phrases and clauses, though they may; and these facts suggest insights into how the brain processes speech.
Prosodic units are marked by phonetic cues, such as a coherent pitch contour – or the gradual decline in pitch and lengthening of vowels over the duration of the unit, until the pitch and speed are reset to begin the next unit. Breathing, both inhalation and exhalation, seems to occur only at these boundaries where the prosody resets.
"Prosodic structure" is important in language contact and lexical borrowing. For example, in Modern Hebrew, the XiXéX verb-template is much more productive than the XaXáX verb-template because in morphemic adaptations of non-Hebrew stems, the XiXéX verb-template is more likely to retain — in all conjugations throughout the tenses — the prosodic structure (e.g., the consonant clusters and the location of the vowels) of the stem.[1]
Prosody and emotion
Emotional prosody is the expression of feelings using prosodic elements of speech. It was recognized by Charles Darwin in The Descent of Man as predating the evolution of human language: "Even monkeys express strong feelings in different tones – anger and impatience by low, – fear and pain by high notes."[2] Native speakers listening to actors reading emotionally neutral text while projecting emotions correctly recognized happiness 62% of the time, anger 95%, surprise 91%, sadness 81%, and neutral tone 76%. When a database of this speech was processed by computer, segmental features allowed better than 90% recognition of happiness and anger, while suprasegmental prosodic features allowed only 44%–49% recognition. The reverse was true for surprise, which was recognized only 69% of the time by segmental features and 96% of the time by suprasegmental prosody.[3] In typical conversation (no actor voice involved), the recognition of emotion may be quite low, of the order of 50%, hampering the complex interrelationship function of speech advocated by some authors.[4]
Brain location of prosody
An aprosodia is an acquired or developmental impairment in comprehending or generating the emotion conveyed in spoken language.
Producing these nonverbal elements requires intact motor areas of the face, mouth, tongue, and throat. This function is associated with Brodmann areas 44 and 45 (Broca's area) of the left frontal lobe. Damage to areas 44/45 produces motor aprosodia, with the nonverbal elements of speech being disturbed (facial expression, tone, rhythm of voice).
Understanding these nonverbal elements requires an intact and properly functioning Brodmann area 22 (Wernicke's area) in the right hemisphere. Right-hemispheric area 22 aids in the interpretation of prosody, and damage causes sensory aprosodia, with the patient unable to comprehend changes in voice and body language.
Prosody is dealt with by a right-hemisphere network that is largely a mirror image of the left perisylvian zone. Damage to the right inferior frontal gyrus causes a diminished ability to convey emotion or emphasis by voice or gesture, and damage to right superior temporal gyrus causes problems comprehending emotion or emphasis in the voice or gestures of others.
Wernicke's area
Wernicke's area ("Wernicke" English pronunciation: /ˈvɛərnɨkə/ or /ˈvɛərnɨki/; German: [ˈvɛʁniːkə]) is one of the two parts of the cerebral cortex linked since the late nineteenth century to speech (the other is Broca's area). It is involved in the understanding of written and spoken language. It is traditionally considered to consist of the posterior section of the superior temporal gyrus in the dominant cerebral hemisphere (which is the left hemisphere in about 90% of people).
Location
Wernicke's area is classically located in the posterior section of the superior temporal gyrus (STG) of the left (or dominant) cerebral hemisphere. This area encircles the auditory cortex on the Sylvian fissure (the part of the brain where the temporal lobe and parietal lobe meet). It is neuroanatomically described as the posterior part of Brodmann area 22.
However, there is an absence of consistent definitions as to its location.[1] Some identify it with the unimodal auditory association area in the superior temporal gyrus anterior to the primary auditory cortex.[2] Others also include adjacent parts of the heteromodal cortex in BA 39 and BA 40 in the parietal lobe.[3]
While previously thought to connect Wernicke's area and Broca's area, new research demonstrates that the arcuate fasciculus instead connects posterior receptive areas with premotor/motor areas, and not with Broca's area.[4]
Wernicke and aphasia
Wernicke's area is named after Carl Wernicke, a German neurologist and psychiatrist who, in 1874, hypothesized a link between the left posterior section of the superior temporal gyrus and the reflexive mimicking of words and their syllables that associated the sensory and motor images of spoken words.[5] He did this on the basis of the location of brain injuries that caused aphasia. Receptive aphasia in which such abilities are preserved is now sometimes called Wernicke's aphasia. In this condition there is a major impairment of language comprehension, while speech retains a natural-sounding rhythm and a relatively normal syntax. Language as a result is largely meaningless (a condition sometimes called fluent or jargon aphasia).
While neuroimaging and lesion evidence generally support the idea that malfunction of or damage to Wernicke's area is common in people with receptive aphasia, this is not always so. Some people may use the right hemisphere for language, and isolated damage to Wernicke's area cortex (sparing white matter and other areas) may not cause severe receptive aphasia.[1][6] Even when patients with Wernicke's area lesions have comprehension deficits, these are usually not restricted to language processing alone. For example, one study found that patients with posterior lesions also had trouble understanding nonverbal sounds like animal and machine noises.[7] In fact, for Wernicke's area, the impairments for nonverbal sounds were statistically stronger than those for verbal sounds.
Right homologous area
Research using transcranial magnetic stimulation suggests that the area corresponding to Wernicke's area in the non-dominant cerebral hemisphere has a role in processing and resolving the subordinate meanings of ambiguous words, such as "river" when given the ambiguous word "bank". In contrast, Wernicke's area in the dominant hemisphere processes dominant word meanings ("teller" given "bank").[8]
Modern views
Neuroimaging suggests the functions earlier attributed to Wernicke's area occur more broadly in the temporal lobe and indeed happen also in Broca's area.
"There are some suggestions that middle and inferior temporal gyri and basal temporal cortex reflect lexical processing ... there is consensus that the STG from rostral to caudal fields and the STS constitute the neural tissue in which many of the critical computations for speech recognition are executed ... aspects of Broca's area (Brodmann areas 44 and 45) are also regularly implicated in speech processing. ... the range of areas implicated in speech processing go well beyond the classical language areas typically mentioned for speech; the vast majority of textbooks still state that this aspect of perception and language processing occurs in Wernicke's area (the posterior third of the STG)."[9]

Support for a broad range of speech processing areas was furthered by a recent study at the University of Rochester in which native speakers of American Sign Language were subjected to MRI scans while interpreting sentences that identified a relationship using either syntax (the relationship is determined by the word order) or inflection (the relationship is determined by physical motion, such as "moving hands through space or signing on one side of the body"). Distinct areas of the brain were activated, with the frontal cortex (associated with the ability to put information into sequences) being more active in the syntax condition and the temporal lobes (associated with dividing information into its constituent parts) being more active in the inflection condition. However, these areas are not mutually exclusive and show a large amount of overlap. These findings imply that while speech processing is a very complex process, the brain may be using fairly basic, preexisting computational methods.[10]
Arcuate fasciculus
The arcuate fasciculus (Latin: "curved bundle") is the neural pathway connecting the posterior part of the temporoparietal junction with the frontal cortex in the brain, and is now considered part of the superior longitudinal fasciculus.[citation needed]
Neuroanatomy
While previously thought to connect Wernicke's area and Broca's area, new research demonstrates that the AF instead connects posterior receptive areas with premotor/motor areas, and not with Broca's area.[1]
The function of the arcuate fasciculus of the non-dominant hemisphere has been studied very little.[citation needed]
Pathology
Damage to this pathway can cause a form of aphasia known as conduction aphasia, where auditory comprehension and speech articulation are preserved, but people find it difficult to repeat heard speech.
Triangular part of inferior frontal gyrus
Pars triangularis (pt) is a region of Broca's area in the left inferior frontal gyrus (IFG) of the frontal lobe in the human brain, mapping to Brodmann area 45. The inferior frontal sulcus forms the superior boundary, the anterior horizontal ramus provides its inferior boundary, and the caudal boundary is formed by the anterior ascending ramus. The pars triangularis contributes to language comprehension, as well as other functions. Some researchers have found leftward asymmetries of the pt (a larger pars triangularis region in the left hemisphere than the corresponding region in the right hemisphere), especially in right-handed individuals, and rightward asymmetries in left-handed subjects. These asymmetries are based on volumetric analyses from magnetic resonance imaging (MRI).
Pars triangularis asymmetry and language dominance


Figure: Opercular part of the inferior frontal gyrus, shown in red.
A strong correlation has been found between speech-language function and the anatomically asymmetric pt. Foundas et al. showed that language function can be localized to one region of the brain, as Broca had done before them, but they also supported the idea that one side of the brain is more involved with language than the other. The two hemispheres of the human brain look roughly like mirror images of each other, but they are not functionally or anatomically identical. Foundas et al. found that the part of Broca's area called pars triangularis is actually bigger on the left than the corresponding region on the right side of the brain. This "leftward asymmetry" corresponded both in form and in function: the side that is larger is also the side that is active during language processing. In almost all the test subjects, this was the left side. In fact, the only subject tested who had right-hemispheric language dominance was found to have a rightward asymmetry of the pt.[1]
Sulcal variability, stereological measurement and asymmetry of Broca's area on MR images
Certain other researchers, however, have found no volumetric asymmetries in the pars triangularis. They have challenged previous findings that pars triangularis asymmetry exists and have suggested that inconsistencies in previous findings may be due to great variability in inter-individual pars triangularis morphology. That is, these regions tend to vary in size and shape much more than other areas of the brain, such as deep cortical nuclei. Furthermore, while these researchers found statistically significant asymmetries in the pars opercularis and the planum temporale, they found no correlations between asymmetries of those brain regions and asymmetry of the pars triangularis.[2]
Functional connections within the human inferior frontal gyrus
At least one study demonstrated a high degree of connectivity between the three subregions of the IFG. By stimulating one region of the IFG and measuring the response in distinct regions, these researchers were able to demonstrate the existence of numerous pathways between pars triangularis and pars opercularis. Also, stimulation of one region of the pars triangularis elicited responses in distinct regions of the pars triangularis, illustrating the presence of networks within the subgyral region.[3]
Localizing the distributed language network responsible for the N400 measured by MEG during auditory sentence processing
The pars triangularis has been implicated in semantic processing of language. By measuring the response of the brain by electroencephalography as it responded to different sentence types (those with or without semantic errors), Maess et al. demonstrated a time lag in the comprehension of erroneous sentences. To understand this, one need only imagine a person being told something they did not understand: they would pause and take a moment to process the information. Furthermore, these researchers demonstrated a characteristic processing pattern called the "N400", which refers to a negativity that appears in the pars triangularis about 400 ms after the semantic mismatch is presented.[4] However, the pars triangularis is likely to be only part of the network generating the N400 response in EEG, since its magnetic counterpart, the N400m, measured using MEG, has been consistently localized to the superior temporal cortex.[5]
Left ventrolateral prefrontal cortex and the cognitive control of memory
Pars triangularis has been shown to have a role in the cognitive control of memory. There is more than one way to remember something. When a person remembers, (s)he retrieves information from storage in a memory center of the brain. This information may be the muscle contraction sequence for shoe-tying, the face of a loved one, or anything in between. When someone remembers something automatically, without concentrating on it and without trying, it's called "bottom-up" processing. But sometimes, people really have to struggle to remember something. Imagine a student taking a test and staring at the one question they haven't answered, thinking to themselves, "Come ON! I know this!" That student is concentrating their attention on retrieving the memory; the student is exhibiting cognitive control over their memory. This type of processing is directed, in part, by the ventrolateral prefrontal cortex (VLPFC). Pars triangularis is found in this region.[6]
Dissociating Reading Processes on the Basis of Neuronal Interactions
When reading aloud, people must decode written language to decipher its pronunciation. This processing takes place in Broca's area. The reader might use previous knowledge of a word in order to correctly vocalize it, or the reader might use knowledge of systematic letter combinations, which represent corresponding phonemes. Scientists can learn about what the brain is doing while people process language by looking at what it does with errors in language. As above, scientists can investigate the extra processing that occurs when people are challenged with a problem. In this case, scientists took advantage of the way pseudo-words and exception words are processed by examining the brain as it interprets these problematic words. When people process language, they use different parts of Broca's area for different things. Pars triangularis is involved in a specific type of language processing. Specifically, pars triangularis becomes activated when people read exception words, which are words with atypical spelling-to-sound relationships. For example, "have" is an exception word because it is pronounced with a short "a", which is contrary to the usual rules of pronunciation. The "e" at the end of the word should lead to the pronunciation of the long "a" sound, as in "cave" or "rave". Because we are so familiar with the word "have", we are able to remember its pronunciation, and we don't have to think through the rules each time we read it. Pars triangularis helps us do that.[7]
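A hedged sketch of this idea, as a toy "lexicon plus letter-to-sound rule" reader in Python. The rule, the lexicon, and the transcriptions are illustrative assumptions by the editor, not the model examined in the cited study:

```python
# Toy exception lexicon: words whose pronunciation must be retrieved whole.
EXCEPTIONS = {"have": "hæv"}

# Toy letter-to-sound values for a few consonants (grossly simplified).
CONSONANTS = {"c": "k", "r": "r", "h": "h", "v": "v"}

def read_aloud(word):
    """Dual-route toy: look exception words up directly; otherwise apply the
    regular 'consonant + a + consonant + silent e' rule, giving long-a /eɪ/."""
    if word in EXCEPTIONS:
        return EXCEPTIONS[word]                                   # lexical route
    if len(word) == 4 and word[1] == "a" and word[3] == "e":
        return CONSONANTS[word[0]] + "eɪ" + CONSONANTS[word[2]]   # rule route
    raise ValueError("outside this toy example's coverage")

for w in ("cave", "rave", "have"):
    print(w, "->", read_aloud(w))   # keɪv, reɪv, hæv
```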
Dissociable Controlled Retrival and Generalized Selection Mechanisms in Ventrolateral Prefrontal Cortex
When trying to retrieve information in a top-down fashion, some kind of control mechanism is necessary. Recalling that top-down retrieval depends on conscious control, it is easy to see that there must be some way to exclude irrelevant data from the retrieval. In order to home in on the desired information, some selection must occur. This selection is thought to occur post-retrieval in the mid-VLPFC, which corresponds generally to the location of pt. The theory here is that information is retrieved by certain regions of the left VLPFC, and then it is selected for relevance in another region. This is called the “two part” model of memory retrieval.[8]
Effect of syntactic similarity on cortical activation during second language processing: A comparison of English and Japanese among native Korean trilinguals
Almost every person in the world has learned at least one language, and almost everyone who has learned a language learned it at a young age. Some people are multilingual. Some of these multilinguals learned second or third languages alongside their first, at a young age, and some learned other languages in adulthood. Studies on different subsets of monolinguals and multilinguals have revealed some interesting findings.
By looking at the similarities between the first and second language and what they do to the brain, these researchers found that brain activation looked very different depending on which language the test subjects were processing. They found that pars triangularis activation does not change significantly during processing of these different languages, which is interesting considering the known role of pars triangularis in language.[9]
Cortical activation in the processing of passive sentences in L1 and L2: A functional magnetic resonance imaging study
The question of how the brain processes primary language versus secondary language is not entirely resolved. This paper showed that comprehension of primary and secondary language occurs in generally the same regions of the brain, even if the secondary language was learned late in life.
There is a difference between the processing patterns of primary and secondary languages in the processing of passive sentences. These are sentences using some form of the verb "be" with a verb in the past participle form. For example, "He is ruined" is a passive sentence because the verb "ruin" is in the past participle form and is used with "is", a form of the verb "be". This study shows that, when processing such sentences, late bilinguals used their pars triangularis much more than their counterparts did. This result implies certain things about the way language is learned. It could be, for example, that the reason people often have such difficulty learning foreign languages during adulthood is that their brains are trying to code language information in a region of the brain that is not dedicated to understanding language. Perhaps this is the reason native speakers are able to speak so quickly while their late-bilingual counterparts are forced to stutter as they struggle to process grammatical rules.[10]
Cortical dynamics of word recognition
There is a theory that pars triangularis is especially involved in semantic processing of language, as opposed to phonological processing. That is, pars triangularis is thought to be more involved in deciphering the meaning of words than in deciding what the word is based on the sound that enters the ear. This study produced data that supported the theory. Furthermore, these researchers saw evidence for parallel semantic processing, which occurs when the brain multitasks. When the subjects were undergoing experimentation, they were presented with consonant strings, pseudo-words, and words, and the delay between stimulus and brain activity was about the same for phonological and semantic processing, even though the two seemed to occur in slightly different regions.[11]
Semantic Encoding and Retrieval in the Left Inferior Prefrontal Cortex: A Functional magnetic resonance imaging Study of Task Difficulty and Process Specificity
These researchers found that pars triangularis (as well as some of its neighbors) increased its activity during semantic encoding, regardless of the difficulty of the word being processed. This is consistent with the theory that pars triangularis is involved in semantic processing more than in phonological processing. Furthermore, they found that these semantic encoding decisions resulted in less involvement of pars triangularis with repetition of the words used. It may seem intuitive that practice would make the brain better at recognizing the words as they reappeared, but there is something else to be learned from this result as well. That pars triangularis activity went down with repetition also signifies the movement of the task of recognizing the word from conscious to automatic processing. This is called repetition priming, and it occurs independently of intention. This idea, when paired with theories about pt's involvement in conscious retrieval of memory, serves to illustrate the complexity of the brain and its functions. These results together imply the possibility that similar mechanics are required for encoding and retrieving information. Another point of interest was that decreased pars triangularis activation with repetition did not occur with redundant presentation of nonsemantically processed words.[12]
On Broca, brain, and binding: a new framework
The pt is highly interconnected with other regions of the brain, especially those in the left frontal language network. Though its function seems to be distinct from its neighbors', this high degree of connectivity supports the idea that language can be integrated into many of the seemingly unrelated thought processes we have. This is not a difficult idea to imagine. For instance, attempting to remember the name of a brand new acquaintance can be challenging, and it often demands the attention of the person doing the remembering. In this example, a person is trying to comprehend sound as a part of language, place the word they just heard in the category "names", while associating it also as a tag for the face they just saw, simultaneously committing all of these pieces of data to memory. In this view, it hardly seems far-fetched that the roles of pars triangularis in language processing, semantic comprehension, and conscious control of memory are related. In fact, it would be unlikely for pars triangularis not to have multiple roles in the brain, especially considering its high degree of connectivity, both within the left frontal language center and to other regions.[13]
Abnormal cortical folding patterns within Broca's area in schizophrenia: Evidence from structural MRI
Schizophrenia is a poorly understood disease with complicated symptoms. In an effort to find a cause for this disorder, these researchers looked at the brains of schizophrenic patients. It had been shown previously that abnormal gyrification, asymmetry, complexity, and variability occur in patients with schizophrenia. These investigators presented data showing that the pt specifically was highly distorted in schizophrenic patients compared with demographically matched normal subjects. They asserted that Broca's area is an especially plastic region of the brain, in that its morphology can change dramatically from childhood to adulthood. This makes sense when considering the special ability of children to learn language easily, but it also means that the involvement of Broca's area is limited with respect to memory and recall: children do not seem to be unable to consciously search their memories. Furthermore, the investigators took volumetric measurements of the grey and white matter of the brains of their test subjects and compared those measurements to their normal control subjects. They found that schizophrenic patients had dramatically reduced white matter.
As the brain develops, the connectivity of different regions changes dramatically. The researchers found a discrepancy in the way white matter and grey matter develop in schizophrenic patients, who tend to show an absence of white matter expansion.[14]
