Similar Documents
Found 20 similar documents (search time: 140 ms)
1.
Clustering coefficient, a measure derived from the new science of networks, refers to the proportion of phonological neighbors of a target word that are also neighbors of each other. Consider the words bat, hat, and can, all of which are neighbors of the word cat; the words bat and hat are also neighbors of each other. In a perceptual identification task, words with a low clustering coefficient (i.e., few neighbors are neighbors of each other) were identified more accurately than words with a high clustering coefficient (i.e., many neighbors are neighbors of each other). In a lexical decision task, words with a low clustering coefficient were responded to more quickly than words with a high clustering coefficient. These findings suggest that the structure of the lexicon (i.e., the similarity relationships among neighbors of the target word, as measured by the clustering coefficient) influences lexical access in spoken word recognition. Simulations of the TRACE and Shortlist models of spoken word recognition failed to account for the present findings. A framework for a new model of spoken word recognition is proposed. (PsycINFO Database Record (c) 2010 APA, all rights reserved)
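To make the measure concrete, here is a minimal Python sketch of the clustering-coefficient computation described above. It uses letter strings and a one-edit (substitution, insertion, or deletion) neighbor rule as a stand-in for phonological transcriptions; the function names, the toy lexicon, and the edit-distance proxy are illustrative assumptions, not part of the original study.

```python
from itertools import combinations

def edit_distance_one(a: str, b: str) -> bool:
    """True if a and b differ by exactly one substitution, insertion, or deletion."""
    if a == b:
        return False
    if abs(len(a) - len(b)) > 1:
        return False
    if len(a) == len(b):  # one substitution
        return sum(x != y for x, y in zip(a, b)) == 1
    if len(a) > len(b):   # make `a` the shorter string
        a, b = b, a
    # one insertion/deletion: removing some character of b must yield a
    return any(a == b[:i] + b[i + 1:] for i in range(len(b)))

def clustering_coefficient(target: str, lexicon: list[str]) -> float:
    """Proportion of the target's neighbor pairs that are also neighbors of each other."""
    neighbors = [w for w in lexicon if edit_distance_one(target, w)]
    if len(neighbors) < 2:
        return 0.0
    pairs = list(combinations(neighbors, 2))
    linked = sum(edit_distance_one(u, v) for u, v in pairs)
    return linked / len(pairs)

# Toy example with letter strings standing in for phoneme transcriptions.
lexicon = ["cat", "bat", "hat", "can", "cot", "cut"]
print(clustering_coefficient("cat", lexicon))
```

For cat in this toy lexicon, only the pairs bat-hat and cot-cut are themselves neighbors, so the clustering coefficient is 2/10 = 0.2; lower values correspond to the sparsely interconnected neighborhoods that were identified more accurately and responded to more quickly in these experiments.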

2.
The authors report 3 dual-task experiments concerning the locus of frequency effects in word recognition. In all experiments, Task 1 entailed a simple perceptual choice and Task 2 involved lexical decision. In Experiment 1, an underadditive effect of word frequency arose for spoken words. Experiment 2 also showed underadditivity for visual lexical decision. It was concluded that word frequency exerts an influence prior to any dual-task bottleneck. A related finding in similar dual-task experiments is Task 2 response postponement at short stimulus onset asynchronies. This was explored in Experiment 3, and it was shown that response postponement was equivalent for both spoken and visual word recognition. These results imply that frequency-sensitive processes operate early and automatically. (PsycINFO Database Record (c) 2010 APA, all rights reserved)

3.
Two experiments combined masked priming with event-related potential (ERP) recordings to examine effects of primes that are orthographic neighbors of target words. Experiment 1 compared effects of repetition primes with effects of primes that were high-frequency orthographic neighbors of low-frequency targets (e.g., faute-faune [error-wildlife]), and Experiment 2 compared the same word neighbor primes with nonword neighbor primes (e.g., aujel-autel [altar]). Word neighbor primes showed the standard inhibitory priming effect in lexical decision latencies that sharply contrasted with the facilitatory effects of nonword neighbor primes. This contrast was most evident in the ERP signal starting at around 300 ms posttarget onset and continuing through the bulk of the N400 component. In this time window, repetition primes and nonword neighbor primes generated more positive-going waveforms than unrelated primes, whereas word neighbor primes produced null effects. The results are discussed with respect to possible mechanisms of lexical competition during visual word recognition. (PsycINFO Database Record (c) 2010 APA, all rights reserved)
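As a rough illustration of how lexical-competition accounts explain this contrast, the toy simulation below lets a word-neighbor prime pre-activate its own lexical unit, which then laterally inhibits the target, while a nonword-neighbor prime contributes only shared-letter support. The update rule, the parameter values, and the function name are invented for illustration; this is not the model tested in these experiments.

```python
def simulate_prime(prime_is_word: bool, shared_letters: int = 3, steps: int = 6) -> float:
    """Toy lexical-competition dynamics: the target's activation grows with
    shared-letter input and is suppressed by lateral inhibition from an
    activated word-neighbor prime. All parameters are invented for illustration."""
    target = 0.0
    competitor = 0.6 if prime_is_word else 0.0   # a word prime activates its own lexical unit
    letter_input = 0.1 * shared_letters          # bottom-up support from shared letters
    for _ in range(steps):
        target += letter_input - 0.5 * competitor  # excitation minus lateral inhibition
        target = max(0.0, min(1.0, target))
        competitor *= 0.9                          # the prime's lexical activation decays
    return target

print("word-neighbor prime:   ", round(simulate_prime(True), 2))   # lower target activation
print("nonword-neighbor prime:", round(simulate_prime(False), 2))  # higher target activation
```

In this caricature the word-neighbor prime leaves the target less activated at the end of the prime-target interval, the qualitative pattern behind inhibitory priming in lexical decision, while the nonword-neighbor prime's facilitation comes entirely from the shared letters.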

4.
The number and type of connections involving different levels of orthographic and phonological representations differentiate between several models of spoken and visual word recognition. At the sublexical level of processing, Borowsky, Owen, and Fonos (1999) demonstrated evidence for direct processing connections from grapheme representations to phoneme representations (i.e., a sensitivity effect) over and above any bias effects, but not in the reverse direction. Neural network models of visual word recognition implement an orthography-to-phonology processing route that involves the same connections for processing sublexical and lexical information, and thus a similar pattern of cross-modal effects for lexical stimuli is expected by models that implement this single type of connection (i.e., orthographic lexical processing should directly affect phonological lexical processing, but not in the reverse direction). Furthermore, several models of spoken word perception predict that there should be no direct connections between orthographic representations and phonological representations, regardless of whether the connections are sublexical or lexical... (PsycINFO Database Record (c) 2010 APA, all rights reserved)

5.
Across 3 different word recognition tasks, distributional analyses were used to examine the joint effects of stimulus quality and word frequency on underlying response time distributions. Consistent with the extant literature, stimulus quality and word frequency produced additive effects in lexical decision, not only in the means but also in the shape of the response time distributions, supporting an early normalization process that is separate from processes influenced by word frequency. In contrast, speeded pronunciation and semantic classification produced interactive influences of word frequency and stimulus quality, which is a fundamental prediction from interactive activation models of lexical processing. These findings suggest that stimulus normalization is specific to lexical decision and is driven by the task's emphasis on familiarity-based information. (PsycINFO Database Record (c) 2010 APA, all rights reserved)
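A small worked example of what "additive effects in the means" amounts to: with invented cell means for a 2 (stimulus quality) x 2 (word frequency) design, additivity means the frequency effect is the same size whether the stimulus is clear or degraded, so the interaction contrast is near zero. The numbers below are made up purely to show the arithmetic and are not data from this study.

```python
# Invented mean lexical decision RTs (ms) for a 2 x 2 design.
rt = {
    ("clear", "high_freq"): 560,
    ("clear", "low_freq"): 610,
    ("degraded", "high_freq"): 640,
    ("degraded", "low_freq"): 690,
}

freq_effect_clear = rt[("clear", "low_freq")] - rt[("clear", "high_freq")]           # 50 ms
freq_effect_degraded = rt[("degraded", "low_freq")] - rt[("degraded", "high_freq")]  # 50 ms
interaction = freq_effect_degraded - freq_effect_clear  # 0 ms -> additive pattern

print(freq_effect_clear, freq_effect_degraded, interaction)
```

An interactive pattern, as reported here for speeded pronunciation and semantic classification, would instead show a larger frequency effect in the degraded condition, i.e., a nonzero interaction contrast.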

6.
Lexical activation is a core process in models of spoken word recognition. The specific words activated are candidates for recognition, with the degree of activation determined by the match with sensory information. Once a word is identified, activation shifts toward a meaningful representation, normally through activation of semantically related words. Words are assumed to acquire familiarity as a result of being activated, providing a basis for memories, both real and imagined. Three experiments showed a direct relationship between the number of false recognitions of words and their presumed degree of activation. The results converge with those from spoken word recognition in implicating lexical activation during early stages of processing. For recognition memory, the message is that prerecognition lexical processing should be included in the memory equation. (PsycINFO Database Record (c) 2010 APA, all rights reserved)

7.
Many models of spoken word recognition posit the existence of lexical and sublexical representations, with excitatory and inhibitory mechanisms used to affect the activation levels of such representations. Bottom-up evidence provides excitatory input, and inhibition from phonetically similar representations leads to lexical competition. In such a system, long words should produce stronger lexical activation than short words, for 2 reasons: Long words provide more bottom-up evidence than short words, and short words are subject to greater inhibition due to the existence of more similar words. Four experiments provide evidence for this view. In addition, reaction-time-based partitioning of the data shows that long words generate greater activation that is available both earlier and for a longer time than is the case for short words. As a result, lexical influences on phoneme identification are extremely robust for long words but are quite fragile and condition-dependent for short words. Models of word recognition must consider words of all lengths to capture the true dynamics of lexical activation. (PsycINFO Database Record (c) 2010 APA, all rights reserved)

8.
In 2 experiments, a boundary technique was used with parafoveal previews that were identical to a target (e.g., sleet), a word orthographic neighbor (sweet), or an orthographically matched nonword (speet). In Experiment 1, low-frequency words in orthographic pairs were targets, and high-frequency words were previews. In Experiment 2, the roles were reversed. In Experiment 1, neighbor words provided as much preview benefit as identical words and greater benefit than nonwords, whereas in Experiment 2, neighbor words provided no greater preview benefit than nonwords. These results indicate that the frequency of a preview influences the extraction of letter information without setting up appreciable competition between previews and targets. This is consistent with a model of word recognition in which early stages largely depend on excitation of letter information, and competition between lexical candidates becomes important only in later stages. (PsycINFO Database Record (c) 2011 APA, all rights reserved)

9.
In 3 studies, the authors explore how repeated exposure to a spoken word affects memory for perceptual attributes associated with the word (such as a talker's voice or a word's plurality). Subjects heard a list of words; particular words were repeated differing numbers of times. At test, subjects estimated the frequency of each word, with instructions to give frequency judgments of "zero" to words with changed attributes. The experiments demonstrate that memory for perceptual attributes improves very little after the first few repetitions, although word memory continues to improve. The experiments extend the registration without learning effect (D. L. Hintzman, T. Curran, and B. Oppy, 1992) to auditory words, to complex attributes (voice), and to conditions of low and high stimulus variability (two or many voices). (PsycINFO Database Record (c) 2010 APA, all rights reserved)

10.
Handwritten word recognition is a field of study that has largely been neglected in the psychological literature, despite its prevalence in society. Whereas studies of spoken word recognition almost exclusively employ natural, human voices as stimuli, studies of visual word recognition use synthetic typefaces, thus simplifying the process of word recognition. The current study examined the effects of handwriting on a series of lexical variables thought to influence bottom-up and top-down processing, including word frequency, regularity, bidirectional consistency, and imageability. The results suggest that the natural physical ambiguity of handwritten stimuli forces a greater reliance on top-down processes, because almost all effects were magnified, relative to conditions with computer print. These findings suggest that processes of word perception naturally adapt to handwriting, compensating for physical ambiguity by increasing top-down feedback. (PsycINFO Database Record (c) 2010 APA, all rights reserved)

11.
Five experiments monitored eye movements in phoneme and lexical identification tasks to examine the effect of within-category subphonetic variation on the perception of stop consonants. Experiment 1 demonstrated gradient effects along voice-onset time (VOT) continua made from natural speech, replicating results with synthetic speech (B. McMurray, M. K. Tanenhaus, & R. N. Aslin, 2002). Experiments 2-5 used synthetic VOT continua to examine effects of response alternatives (2 vs. 4), task (lexical vs. phoneme decision), and type of token (word vs. consonant-vowel). A gradient effect of VOT in at least one half of the continuum was observed in all conditions. These results suggest that during online spoken word recognition, lexical competitors are activated in proportion to their continuous distance from a category boundary. This gradient processing may allow listeners to anticipate upcoming acoustic-phonetic information in the speech signal and dynamically compensate for acoustic variability. (PsycINFO Database Record (c) 2010 APA, all rights reserved)
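The claim that lexical competitors are activated in proportion to their continuous distance from a category boundary can be sketched as a graded mapping from VOT to competitor activation. The logistic form, the 20-ms boundary, the slope, and the two-competitor setup below are illustrative assumptions, not the authors' fitted model or stimuli.

```python
import math

def graded_activation(vot_ms: float, boundary_ms: float = 20.0, slope: float = 0.25) -> dict:
    """Map a token's VOT to graded activation of a voiced/voiceless lexical pair.
    Illustrative logistic mapping; parameter values are invented, not fitted."""
    p_voiceless = 1.0 / (1.0 + math.exp(-slope * (vot_ms - boundary_ms)))
    return {"voiced competitor": round(1.0 - p_voiceless, 2),
            "voiceless competitor": round(p_voiceless, 2)}

# Steps along a VOT continuum, including within-category steps on each side of the boundary.
for vot in (5, 10, 15, 20, 25, 30, 35):
    print(vot, graded_activation(vot))
```

Note that within-category steps (e.g., 5 vs. 15 ms on the voiced side) still change the competitor activations; a strictly categorical mapping would assign identical activations to all tokens on the same side of the boundary, contrary to the gradient fixation effects reported here.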

12.
Three experiments examined whether the identification of a visual word is followed by its subvocal articulation during reading. An irrelevant spoken word (ISW) that was identical, phonologically similar, or dissimilar to a visual target word was presented when the eyes moved to the target in the course of sentence reading. Sentence reading was further accompanied by either a sequential finger tapping task (Experiment 1) or an articulatory suppression task (Experiment 2). Experiment 1 revealed sound-specific interference from a phonologically similar ISW during posttarget viewing. This interference was absent in Experiment 2, where similar and dissimilar ISWs impeded target and posttarget reading equally. Experiment 3 showed that articulatory suppression left the lexical processing of visual words intact and that it did not diminish the influence of visual word recognition on eye guidance. The presence of sound-specific interference during posttarget reading in Experiment 1 is attributed to deleterious effects of a phonologically similar ISW on the subvocal articulation of a target. Its absence in Experiment 2 is attributed to the suppression of a target's subvocal articulation. (PsycINFO Database Record (c) 2010 APA, all rights reserved)

13.
Recent studies have found that masked word primes that are orthographic neighbors of the target inhibit lexical decision latencies (Davis & Lupker, 2006; Nakayama, Sears, & Lupker, 2008), consistent with the predictions of lexical competition models of visual word identification (e.g., Grainger & Jacobs, 1996). In contrast, using the fast priming paradigm (Sereno & Rayner, 1992), orthographically similar primes produced facilitation in a reading task (H. Lee, Rayner, & Pollatsek, 1999; Y. Lee, Binder, Kim, Pollatsek, & Rayner, 1999). Experiment 1 replicated this facilitation effect using orthographic neighbor primes. In Experiment 2, neighbor primes and targets were presented in different cases (e.g., SIDE–tide); in this situation, the facilitation effect disappeared. However, nonword neighbor primes (e.g., KIDE–tide) still significantly facilitated reading of targets (Experiment 3). Taken together, these results suggest that it is possible to explain the priming effects from word neighbor primes in fast priming experiments in terms of the interactions between the inhibitory and facilitatory processes embodied in lexical competition models. (PsycINFO Database Record (c) 2010 APA, all rights reserved)

14.
Reports an error in "Context and spoken word recognition in a novel lexicon" by Kathleen Pirog Revill, Michael K. Tanenhaus and Richard N. Aslin (Journal of Experimental Psychology: Learning, Memory, and Cognition, 2008[Sep], Vol 34[5], 1207-1223). Figure 9 was inadvertently duplicated as Figure 10. Figure 9 in the original article was correct. The correct Figure 10 is provided. (The following abstract of the original article appeared in record 2008-11850-014.) Three eye movement studies with novel lexicons investigated the role of semantic context in spoken word recognition, contrasting 3 models: restrictive access, access-selection, and continuous integration. Actions directed at novel shapes caused changes in motion (e.g., looming, spinning) or state (e.g., color, texture). Across the experiments, novel names for the actions and the shapes varied in frequency, cohort density, and whether the cohorts referred to actions (Experiment 1) or shapes with action-congruent or action-incongruent affordances (Experiments 2 and 3). Experiment 1 demonstrated effects of frequency and cohort competition from both displayed and non-displayed competitors. In Experiment 2, a biasing context induced an increase in anticipatory eye movements to congruent referents and reduced the probability of looks to incongruent cohorts, without the delay predicted by access-selection models. In Experiment 3, context did not reduce competition from non-displayed incompatible neighbors as predicted by restrictive access models. The authors conclude that the results are most consistent with continuous integration models. (PsycINFO Database Record (c) 2010 APA, all rights reserved)

15.
We studied the processing of two-word strings in French made up of a determiner and a noun that contains a schwa (mute e). Depending on the noun, schwa deletion is present, optional, or absent. In a production study, we show that schwa deletion and the category of the noun have a large impact on the duration of the strings. We take this into account in two perception studies, which use word repetition and lexical decision, and which show that words in which the schwa has been deleted usually take longer to recognize than words that retain the schwa, but that this also depends on the category of the word. We explain these results by examining the influence of orthography. Based on the model proposed by Grainger and Ferrand (1996), which integrates the written dimension, we suggest that two sources of information, phonological and orthographic, interact during spoken word recognition. (PsycINFO Database Record (c) 2010 APA, all rights reserved)

16.
Event-related potentials elicited by semantically associated and unassociated word pairs embedded in congruous and semantically anomalous spoken sentences were recorded from patients with Alzheimer's disease (AD) and healthy older and young controls as a means of examining the nature, time course, and relation between word and sentence context effects. All groups demonstrated lexical priming in nonsensical sentences, but it was earlier in the young (200-600 ms) than in the older controls (600-800 ms), and even later in the probable AD patients (800-1,000 ms). Moreover, processing in both the elderly and AD groups benefited disproportionately from a meaningful sentence context. The results do not accord well with either a strictly structural or a strictly functional account of the semantic impairments in AD. (PsycINFO Database Record (c) 2010 APA, all rights reserved)

17.
When a listener hears a word (beef), current theories of spoken word recognition posit the activation of both lexical (beef) and sublexical (/b/, /i/, /f/) representations. No lexical representation can be settled on for an unfamiliar utterance (peef). The authors examined the perception of nonwords (peef) as a function of words or nonwords heard 10-20 min earlier. In lexical decision, nonword recognition responses were delayed if a similar word had been heard earlier. In contrast, nonword processing was facilitated by the earlier presentation of a similar nonword (baff-paff). This pattern was observed for both word-initial (beef-peef) and word-final (job-jop) deviations. With the word-in-noise task, real word primes (beef) increased real word intrusions for the target nonword (peef), but only consonant-vowel (CV) or vowel-consonant (VC) intrusions were increased with similar pseudoword primes (baff-paff). The results across tasks and experiments support both a lexical neighborhood view of activation and sublexical representations based on chunks larger than individual phonemes (CV or VC sequences). (PsycINFO Database Record (c) 2010 APA, all rights reserved)

18.
Recent debates on lexical ambiguity resolution have centered on the subordinate-bias effect, in which reading time is longer on a biased ambiguous word in a subordinate-biasing context than on a control word. The nature of the control word was examined: namely, whether it matched the frequency of the ambiguous word's overall word form or of its contextually instantiated word meaning (a higher- or lower-frequency word, respectively). In addition, contexts that were singularly supportive of the ambiguous word's subordinate meaning were used. Eye movements were recorded as participants read contextually biasing passages that contained an ambiguous word target or a word-form or word-meaning control. A comparison of fixation times on the 2 control words revealed a significant effect of word frequency. Fixation times on the ambiguous word generally fell between those on the 2 controls and differed significantly from both. Results are discussed in relation to the reordered access model, in which both meaning frequency and prior context affect access procedures. (PsycINFO Database Record (c) 2010 APA, all rights reserved)

19.
Word frequency and orthographic familiarity were independently manipulated as readers' eye movements were recorded. Word frequency influenced fixation durations and the probability of word skipping when orthographic familiarity was controlled. These results indicate that lexical processing of words can influence saccade programming (as shown by fixation durations and which words are fixated). Orthographic familiarity, but not word frequency, influenced the duration of prior fixations. These results provide evidence for orthographic, but not lexical, parafoveal-on-foveal effects. Overall, the findings have a crucial implication for models of eye movement control in reading: There must be sufficient time for lexical factors to influence saccade programming before saccade metrics and timing are finalized. The conclusions are critical for the fundamental architecture of models of eye movement control in reading, namely how to reconcile long saccade programming times and complex linguistic influences on saccades during reading. (PsycINFO Database Record (c) 2010 APA, all rights reserved)

20.
Four experiments examined effects of lexical stress on lexical access for recently learned words. Participants learned artificial lexicons (48 words) containing phonologically similar items and were tested on their knowledge in a 4-alternative forced-choice (4AFC) referent-selection task. Lexical stress differences did not reduce confusions between cohort items: KAdazu and kaDAzeI were confused with one another in a 4AFC task and in gaze fixations as often as BOsapeI and BOsapaI. However, lexical stress did affect the relative likelihood of stress-initial confusions when words were embedded in running nonsense speech. Words with medial stress, regardless of initial vowel quality, were more prone to confusions than words with initial stress. The authors concluded that noninitial stress, particularly when word segmentation is difficult, may serve as "noise" that alters lexical learning and lexical access. (PsycINFO Database Record (c) 2010 APA, all rights reserved)
