Similar Documents
20 similar documents found (search time: 234 ms)
1.
[Correction Notice: An erratum for this article was reported in Vol 32(2) of Journal of Experimental Psychology: Learning, Memory, and Cognition (see record 2007-16796-001). The note to Appendix B (Stimuli Used in Experiment 2) on p. 14 contained errors. The fourth sentence, "For example, for participants receiving List A, lock was the target, key was the semantically related object, deer was the target's control, and apple was the related objects control" should read as follows: "For example, for participants receiving List A, logs was the target, key was the semantic onset competitor, and apple was the competitor's control."] Two experiments explore the activation of semantic information during spoken word recognition. Experiment 1 shows that as the name of an object unfolds (e.g., lock), eye movements are drawn to pictorial representations of both the named object and semantically related objects (e.g., key). Experiment 2 shows that objects semantically related to an uttered word's onset competitors become active enough to draw visual attention (e.g., if the uttered word is logs, participants fixate on key because of partial activation of lock), even though the onset competitor itself is not present in the visual display. Together, these experiments provide detailed information about the activation of semantic information associated with a spoken word and its phonological competitors and demonstrate that transient semantic activation is sufficient to impact visual attention. (PsycINFO Database Record (c) 2010 APA, all rights reserved)
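The dependent measure in both experiments is the proportion of fixations on each object type in successive time bins after word onset. The sketch below shows that computation in Python under an assumed trial-level data format and bin width; it illustrates the measure itself, not the authors' analysis pipeline.

```python
# Minimal sketch of the visual-world dependent measure (data format and
# bin width are assumptions, not the authors' pipeline).
from collections import defaultdict

# Each record: (trial_id, time in ms relative to word onset, fixated object),
# with the object coded as "target", "related", "control", or "other".
fixation_samples = [
    (1, 120, "other"), (1, 280, "related"), (1, 460, "target"),
    (2, 150, "other"), (2, 300, "target"), (2, 520, "target"),
]

BIN_MS = 200  # hypothetical bin width

def fixation_proportions(samples, bin_ms=BIN_MS):
    """Return, for each time bin, the proportion of samples on each object."""
    counts = defaultdict(lambda: defaultdict(int))
    totals = defaultdict(int)
    for _, t, obj in samples:
        b = t // bin_ms
        counts[b][obj] += 1
        totals[b] += 1
    return {b * bin_ms: {obj: n / totals[b] for obj, n in objs.items()}
            for b, objs in sorted(counts.items())}

for bin_start, props in fixation_proportions(fixation_samples).items():
    print(f"{bin_start}-{bin_start + BIN_MS} ms:", props)
```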

2.
Participants saw a small number of objects in a visual display and performed a visual-detection or visual-discrimination task in the context of task-irrelevant spoken distractors. In each experiment, a visual cue was presented 400 ms after the onset of a spoken word. In Experiments 1 and 2, the cue was an isoluminant color change and participants generated an eye movement to the target object. In Experiment 1, responses were slower when the spoken word referred to the distractor object than when it referred to the target object. In Experiment 2, responses were slower when the spoken word referred to a distractor object than when it referred to an object not in the display. In Experiment 3, the cue was a small shift in location of the target object and participants indicated the direction of the shift. Responses were slowest when the word referred to the distractor object, faster when the word did not have a referent, and fastest when the word referred to the target object. Taken together, the results demonstrate that referents of spoken words capture attention. (PsycINFO Database Record (c) 2011 APA, all rights reserved)

3.
Two visual-world experiments evaluated the time course and use of orthographic information in spoken-word recognition using printed words as referents. Participants saw 4 words on a computer screen and listened to spoken sentences instructing them to click on one of the words (e.g., Click on the word bead). The printed words appeared 200 ms before the onset of the spoken target word. In Experiment 1, the display included the target word and a competitor with either a lower degree (e.g., bear) or a higher degree (e.g., bean) of phonological overlap with the target. Both competitors had the same degree of orthographic overlap with the target. There were more fixations to the competitors than to unrelated distractors. Crucially, the likelihood of fixating a competitor did not vary as a function of the amount of phonological overlap between target and competitor. In Experiment 2, the display included the target word and a competitor with either a lower degree (e.g., bare) or a higher degree (e.g., bear) of orthographic overlap with the target. Competitors were homophonous and thus had the same degree of phonological overlap with the target. There were more fixations to higher overlap competitors than to lower overlap competitors, beginning during the temporal interval where initial fixations driven by the vowel are expected to occur. The authors conclude that orthographic information is rapidly activated as a spoken word unfolds and is immediately used in mapping spoken words onto potential printed referents. (PsycINFO Database Record (c) 2010 APA, all rights reserved)
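The two manipulations can be made concrete with a simple position-matching overlap measure. The metric and the rough phonemic transcriptions below are illustrative assumptions (the article does not specify how overlap was quantified), but they reproduce the intended gradations for the example items: bear and bean match bead equally well orthographically while bean overlaps more phonologically, and the homophones bear and bare match bead equally well phonologically while bear overlaps more orthographically.

```python
# Illustrative position-matching overlap measure; both the metric and the
# rough phonemic forms are assumptions, not taken from the article.
def overlap(a, b):
    """Proportion of position-matched segments shared by two sequences."""
    return sum(x == y for x, y in zip(a, b)) / max(len(a), len(b))

target = "bead"
phon = {"bead": "bid", "bear": "bEr", "bean": "bin", "bare": "bEr"}  # assumed

for comp in ("bear", "bean", "bare"):
    print(f"bead~{comp}: orthographic {overlap(target, comp):.2f}, "
          f"phonological {overlap(phon[target], phon[comp]):.2f}")
```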

4.
This article demonstrates the operation of a direct visual route to action in response to objects, in addition to a semantically mediated route. Four experiments were conducted in which participants made gesturing or naming responses to pictures under deadline conditions. There was a cross-over interaction in the number of visual errors relative to the number of semantic plus semantic-visual errors in the two tasks: In gesturing, compared with naming, participants made higher proportions of visual errors and lower proportions of semantic plus semantic-visual errors (Experiments 1, 3, and 4). These results suggest that naming and gesturing are dependent on separate information-processing routes from stimulus to response, with gesturing dependent on a visual route in addition to a semantic route. Partial activation of competing responses from the visual information present in objects (mediated by the visual route to action) leads to high proportions of visual errors under deadline conditions. Also, visual errors do not occur when gestures are made in response to words under a deadline (Experiment 2), which indicates that the visual route is specific to seen objects.

5.
Three experiments examine whether spatial attention and visual word recognition processes operate independently or interactively in a spatially cued lexical-decision task. Participants responded to target strings that had been preceded first by a prime word at fixation and then by an abrupt onset cue either above or below fixation. Targets appeared either in the cued (i.e., valid) or uncued (i.e., invalid) location. The proportion of validly cued trials and the proportion of semantically related prime-target pairs were manipulated independently. It is concluded that spatial attention and visual word recognition processes are best seen as interactive. Spatial attention affects word recognition in 2 distinct ways: (a) it affects the uptake of orthographic information, possibly acting as "glue" to hold letters in their proper places in words, and (b) it (partly) determines whether or not activation from the semantic level feeds down to the lexical level during word recognition. (PsycINFO Database Record (c) 2010 APA, all rights reserved)

6.
In 2 experiments involving patients with semantic dementia, the authors investigated the impact of semantic memory loss on both true and false recognition. Experiment 1 involved recognition memory for categories of everyday objects that shared a predominantly semantic relationship. The patients showed preserved item-specific recollection for the pictorial stimuli but, compared with control participants, exhibited significantly reduced utilization of gist information regarding the categories of objects. The latter result is consistent with the patients' degraded semantic knowledge. Experiment 2 involved categories of abstract objects that were related to one another perceptually rather than semantically. Patients with semantic dementia obtained item-specific recollection and gist memory scores that were indistinguishable from those of control participants. These results suggest that the reduction in gist memory in semantic dementia is largely specific to semantic representations and cannot be attributed to general difficulty with abstracting and/or utilizing gistlike commonalities between stimuli. (PsycINFO Database Record (c) 2010 APA, all rights reserved)

7.
In 4 cross-modal naming experiments, researchers investigated the role of sentence constraint in natural language comprehension. On the sentence constraint account, incoming linguistic material activates semantic features that in turn pre-activate likely upcoming words. The 1st and 2nd experiments investigated whether stimulus offset asynchrony played a critical role in previous studies supporting the sentence constraint account. The 3rd and 4th experiments examined further predictions of the sentence constraint account, in particular whether pre-activated words would compete for activation. In Experiment 3, the researchers manipulated whether an expected target word had a close competitor and found that response to the expected word was facilitated regardless of the proximity of a competitor. The 4th experiment established that close competitors were primed by the sentence frames and should have been available to compete with expected target words. Thus, word-level representations did not compete for activation. (PsycINFO Database Record (c) 2010 APA, all rights reserved)

8.
In 3 experiments, we investigated the effect of grammatical gender on object categorization. Participants were asked to judge whether 2 objects, whose names did or did not share grammatical gender, belonged to the same semantic category by pressing a key. Monolingual speakers of English (Experiment 1), Italian (Experiments 1 and 2), and Spanish (Experiments 2 and 3) were tested in their native language. Italian and Spanish participants responded faster to pairs of stimuli sharing the same gender, whereas no difference was observed for English participants. In Experiment 2, the pictures were chosen in such a way that the grammatical gender of the names was opposite in Italian and Spanish. Therefore, the same pair of stimuli gave rise to different patterns depending on the gender congruency of the names in the languages. In Experiment 3, Spanish speakers performed the same task under an articulatory suppression condition, showing no grammatical gender effect. The locus where meaning and gender interact can be located at the level of the lexical representation that specifies syntactic information: Nouns sharing the same grammatical gender activate each other, thus facilitating their processing and speeding up responses to both semantically related and semantically unrelated pairs. (PsycINFO Database Record (c) 2011 APA, all rights reserved)

9.
In 4 chronometric experiments, influences of spoken word planning on speech recognition were examined. Participants were shown pictures while hearing a tone or a spoken word presented shortly after picture onset. When a spoken word was presented, participants indicated whether it contained a prespecified phoneme. When the tone was presented, they indicated whether the picture name contained the phoneme (Experiment 1) or they named the picture (Experiment 2). Phoneme monitoring latencies for the spoken words were shorter when the picture name contained the prespecified phoneme compared with when it did not. Priming of phoneme monitoring was also obtained when the phoneme was part of spoken nonwords (Experiment 3). However, no priming of phoneme monitoring was obtained when the pictures required no response in the experiment, regardless of monitoring latency (Experiment 4). These results provide evidence that an internal phonological pathway runs from spoken word planning to speech recognition and that active phonological encoding is a precondition for engaging the pathway. (PsycINFO Database Record (c) 2010 APA, all rights reserved)

10.
The authors report 3 dual-task experiments concerning the locus of frequency effects in word recognition. In all experiments, Task 1 entailed a simple perceptual choice and Task 2 involved lexical decision. In Experiment 1, an underadditive effect of word frequency arose for spoken words. Experiment 2 also showed underadditivity for visual lexical decision. It was concluded that word frequency exerts an influence prior to any dual-task bottleneck. A related finding in similar dual-task experiments is Task 2 response postponement at short stimulus onset asynchronies. This was explored in Experiment 3, and it was shown that response postponement was equivalent for both spoken and visual word recognition. These results imply that frequency-sensitive processes operate early and automatically. (PsycINFO Database Record (c) 2010 APA, all rights reserved)

11.
Research has illustrated dissociations between "cognitive" and "action" systems, suggesting that different representations may underlie phenomenal experience and visuomotor behavior. However, these systems also interact. The present studies show a necessary interaction when semantic processing of an object is required for an appropriate action. Experiment 1 demonstrated that a semantic task interfered with grasping objects appropriately by their handles, but a visuospatial task did not. Experiment 2 assessed performance on a visuomotor task that had no semantic component and showed a reversal of the effects of the concurrent tasks. In Experiment 3, variations on concurrent word tasks suggested that retrieval of semantic information was necessary for appropriate grasping. In all, without semantic processing, the visuomotor system can direct the effective grasp of an object, but not in a manner that is appropriate for its use. (PsycINFO Database Record (c) 2010 APA, all rights reserved)

12.
The time course of spoken word recognition depends largely on the frequencies of a word and its competitors, or neighbors (similar-sounding words). However, variability in natural lexicons makes systematic analysis of frequency and neighbor similarity difficult. Artificial lexicons were used to achieve precise control over word frequency and phonological similarity. Eye tracking provided time course measures of lexical activation and competition (during spoken instructions to perform visually guided tasks) both during and after word learning, as a function of word frequency, neighbor type, and neighbor frequency. Apparent shifts from holistic to incremental competitor effects were observed in adults and neural network simulations, suggesting such shifts reflect general properties of learning rather than changes in the nature of lexical representations. (PsycINFO Database Record (c) 2010 APA, all rights reserved)
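In visual-world studies of this kind, lexical activation is commonly linked to fixation behavior through the Luce choice rule: each candidate's predicted probability of being fixated is proportional to an exponential function of its activation. The toy frequency-weighted activations below are assumptions of this sketch, not the article's network simulations; they simply show how a higher-frequency neighbor draws more predicted fixations than a lower-frequency one.

```python
import math

def luce_choice(activations, k=7.0):
    """Map lexical activations to predicted fixation probabilities via the
    Luce choice rule; k is an illustrative scaling constant."""
    expd = {w: math.exp(k * a) for w, a in activations.items()}
    z = sum(expd.values())
    return {w: v / z for w, v in expd.items()}

def activation(match, log_freq, beta=0.2):
    """Toy activation: bottom-up match (0-1) modulated by log frequency.
    A sketch, not the simulations reported in the article."""
    return match * (1 + beta * log_freq)

# Hypothetical lexicon partway through hearing the target word.
candidates = {
    "target": activation(match=0.9, log_freq=3.0),
    "high-freq neighbor": activation(match=0.7, log_freq=4.5),
    "low-freq neighbor": activation(match=0.7, log_freq=1.0),
    "unrelated": activation(match=0.1, log_freq=3.0),
}

for word, p in sorted(luce_choice(candidates).items(), key=lambda kv: -kv[1]):
    print(f"{word}: {p:.3f}")
```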

13.
The role of assembled phonology in visual word recognition was investigated using a task in which participants judged whether 2 words (e.g., PILLOW–BEAD) were semantically related. Of primary interest was whether it would be more difficult to respond "no" to "false homophones" (e.g., BEAD) of words (BED) that are semantically related to target words than to orthographic controls (BEND). (BEAD is a false homophone of BED because –EAD can be pronounced /εd/.) In Experiment 1, there was an interference effect in the response time data, but not in the error data. These results were replicated in a 2nd experiment in which a parafoveal preview was provided for the 2nd word of the pair. A 3rd experiment ruled out explanations of the false homophone effect in terms of inconsistency in spelling-to-sound mappings or inadequate spelling knowledge. It is argued that assembled phonological representations activate meaning in visual word recognition. (PsycINFO Database Record (c) 2010 APA, all rights reserved)

14.
Approaches to spoken word recognition differ in the importance they assign to word onsets during lexical access. This research contrasted the hypothesis that lexical access is strongly directional with the hypothesis that word onsets are less important than the overall goodness of fit between input and lexical form. A cross-modal priming technique was used to investigate the extent to which a rhyme prime (a prime that differs only in its first segment from the word that is semantically associated with the visual probe) is as effective a prime as the original word itself. Earlier research had shown that partial primes that matched from word onset were very effective cross-modal primes. The present results show that, irrespective of whether the rhyme prime was a real word or not, and irrespective of the amount of overlap between the rhyme prime and the original word, the rhymes are much less effective primes than the full word. In fact, no overall priming effect could be detected at all except under conditions in which the competitor environment was very sparse. This suggests that word onsets do have a special status in the lexical access of spoken words. (PsycINFO Database Record (c) 2010 APA, all rights reserved)

15.
The influence of color as a surface feature versus its influence as stored knowledge in object recognition was assessed. Participants decided whether a briefly presented and masked picture matched a test name. For pictures and words referring to similarly shaped objects, semantic color similarity (SCS) was present when picture and word shared the same prototypical color (e.g., purple apple followed by cherry). Perceptual color similarity (PCS) was present when the surface color of the picture matched the prototypical color of the named object (e.g., purple apple followed by blueberry). Response interference was primarily due to SCS, despite the fact that participants based similarity ratings on PCS. When uncolored objects were used, SCS interference still occurred, implying that the influence of SCS did not depend on the presence of surface color. The results indicate that, relative to surface color, stored color knowledge was more influential in object recognition. (PsycINFO Database Record (c) 2010 APA, all rights reserved)

16.
An event-related brain potential experiment was carried out to investigate the temporal relationship between lexical selection and semantic integration in auditory sentence processing. Participants were presented with spoken sentences that ended with a word that was either semantically congruent or anomalous. The moment at which a sentence-final word could be uniquely identified, its isolation point (IP), was compared with the onset of the elicited N400 congruity effect, which reflects semantic integration processing. The results revealed that the onset of the N400 effect occurred prior to the IP of the sentence-final words. Moreover, whether a word had an early or a late IP did not affect the onset of the N400 effect. These findings indicate that lexical selection and semantic integration are cascading processes, in that semantic integration processing can start before the acoustic information allows the selection of a unique candidate and seems to be attempted in parallel for multiple candidates that are still compatible with the bottom-up acoustic input. (PsycINFO Database Record (c) 2010 APA, all rights reserved)
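Comparing the onset of the N400 effect with the isolation point requires estimating when the congruity effect begins in the difference wave. Below is a minimal onset-detection sketch on synthetic single-electrode data, assuming a simple threshold-plus-duration criterion; it does not reproduce the article's actual statistical procedure.

```python
import numpy as np

def n400_onset(congruent, anomalous, times, thresh_uv=1.0, run_ms=40, srate=250):
    """Estimate the onset of an N400 congruity effect as the first time at
    which the congruent-minus-anomalous difference wave (the N400 is a
    larger negativity for anomalous words) exceeds a threshold for a
    sustained run. An illustrative criterion, not the authors' procedure."""
    diff = congruent.mean(axis=0) - anomalous.mean(axis=0)  # average over trials
    run = int(run_ms * srate / 1000)  # samples the effect must persist
    above = diff > thresh_uv
    for i in range(len(above) - run + 1):
        if above[i:i + run].all():
            return times[i]
    return None

# Hypothetical single-electrode data: trials x samples, in microvolts.
srate = 250
times = np.arange(-100, 900, 1000 / srate)          # ms from word onset
rng = np.random.default_rng(0)
congruent = rng.normal(0.0, 0.5, (40, times.size))
anomalous = rng.normal(0.0, 0.5, (40, times.size))
anomalous[:, (times > 300) & (times < 500)] -= 3.0  # injected "N400" window

print("estimated onset:", n400_onset(congruent, anomalous, times), "ms")
```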

17.
Four experiments investigated activation of semantic information in action preparation. Participants prepared either to grasp and use an object (e.g., to drink from a cup) or to lift a finger in association with the object's position, following a go/no-go lexical-decision task. Word stimuli were consistent with the action goals of the object use (Experiment 1) or with the finger lifting (Experiment 2). Movement onset times yielded a double dissociation of consistency effects between action preparation and word processing. This effect was also present for semantic categorizations (Experiment 3), but disappeared when introducing a letter identification task (Experiment 4). In sum, our findings indicate that action semantics are activated selectively in accordance with the specific action intention of an actor. (PsycINFO Database Record (c) 2011 APA, all rights reserved)

18.
In 3 picture–word interference experiments, speakers named a target object in the presence of an unrelated not-to-be-named context object. Distractor words, which were phonologically related or unrelated to the context object's name, were used to determine whether the context object had become phonologically activated. All objects had high-frequency names, and the ease of processing of these objects was manipulated by a visual degradation technique. In Experiment 1, both objects were nondegraded; in Experiment 2, both objects were degraded; and in Experiment 3, either the target object or the context object was degraded. Distractor words, which were phonologically related to the context objects, interfered with the naming response when both objects were nondegraded, indicating that the context objects had become phonologically coactivated. The effect vanished when both objects were degraded, when only the context object was degraded, and when only the target object was degraded. These data demonstrate that the amount of available processing resources constrains the forward cascading of activation in the conceptual-lexical system. Context objects are likely to become phonologically coactivated if they are easily retrieved and if prioritized target processing leaves sufficient resources. (PsycINFO Database Record (c) 2011 APA, all rights reserved)

19.
A spoken language eye-tracking methodology was used to evaluate the effects of sentence context and proficiency on parallel language activation during spoken language comprehension. Nonnative speakers with varying proficiency levels viewed visual displays while listening to French sentences (e.g., Marie va décrire la poule [Marie will describe the chicken]). Displays depicted several objects including the final noun target (chicken) and an interlingual near-homophone (e.g., pool) whose name in English is phonologically similar to the French target (poule). Listeners' eye movements reflected temporary consideration of the interlingual competitor when hearing the target noun, demonstrating cross-language lexical competition. However, competitor fixations were dramatically reduced when prior sentence information was incompatible with the competitor (e.g., Marie va nourrir… [Marie will feed…]). In contrast, interlingual competition from English did not vary according to participants' rated proficiency in French, even though proficiency reliably predicted other aspects of processing behavior, suggesting higher proficiency in the active language does not provide a significant independent source of control over interlingual competition. The results provide new insights into the nature of parallel language activation in naturalistic sentential contexts. (PsycINFO Database Record (c) 2010 APA, all rights reserved)

20.
Reports an error in "Context and spoken word recognition in a novel lexicon" by Kathleen Pirog Revill, Michael K. Tanenhaus and Richard N. Aslin (Journal of Experimental Psychology: Learning, Memory, and Cognition, 2008[Sep], Vol 34[5], 1207-1223). Figure 9 was inadvertently duplicated as Figure 10. Figure 9 in the original article was correct. The correct Figure 10 is provided. (The following abstract of the original article appeared in record 2008-11850-014.) Three eye movement studies with novel lexicons investigated the role of semantic context in spoken word recognition, contrasting 3 models: restrictive access, access-selection, and continuous integration. Actions directed at novel shapes caused changes in motion (e.g., looming, spinning) or state (e.g., color, texture). Across the experiments, novel names for the actions and the shapes varied in frequency, cohort density, and whether the cohorts referred to actions (Experiment 1) or shapes with action-congruent or action-incongruent affordances (Experiments 2 and 3). Experiment 1 demonstrated effects of frequency and cohort competition from both displayed and non-displayed competitors. In Experiment 2, a biasing context induced an increase in anticipatory eye movements to congruent referents and reduced the probability of looks to incongruent cohorts, without the delay predicted by access-selection models. In Experiment 3, context did not reduce competition from non-displayed incompatible neighbors as predicted by restrictive access models. The authors conclude that the results are most consistent with continuous integration models. (PsycINFO Database Record (c) 2010 APA, all rights reserved)
