Similar Documents
20 similar documents found
1.
Previous research has demonstrated that particular facial expressions more readily acquire excitatory strength when paired with a congruent unconditioned stimulus than when paired with an incongruent outcome. The present study, with 36 undergraduates, extends these findings on the excitatory/inhibitory role of facial expressions by demonstrating that particular facial expressions (fear and happy), when paired with a neutral cue (tone), can influence conditioning to the neutral conditioned stimulus (CS). Ss who had a fear expression paired with the neutral CS responded more to the fear expression than to the neutral CS, whereas Ss who had a happy expression paired with the neutral CS responded more to the neutral cue than to the happy expression. These findings strongly support predictions from "overshadowing" or "blocking" models of classical conditioning. (12 ref) (PsycINFO Database Record (c) 2010 APA, all rights reserved)

2.
This research examined the relationship between facial immaturity and the perception of youthfulness, helplessness, and cuteness. In the first study, college students rated 16 faces for youthfulness. Faces varied along four dimensions (eye position, eye size, nose length, and shape of chin), each representing either a mature or an immature feature. College students rated faces containing immature features as more youthful than those without such features. In the second study, three groups of children (5 to 8, 9 to 12, and 13 to 16 years old) rated the same 16 faces with respect to cuteness, helplessness, and youthfulness. Children were similar in their attention to immature features when evaluating faces for youthful qualities, although older children were more sensitive to eye position than younger children when rating faces for youthfulness and helplessness. Older children were also more consistent in their attention to immature features when rating faces.

3.
Reports an error in "Facial expressions of emotion influence memory for facial identity in an automatic way" by Arnaud D'Argembeau and Martial Van der Linden (Emotion, 2007[Aug], Vol 7[3], 507-515). The image printed for Figure 3 was incorrect. The correct image is provided in the erratum. (The following abstract of the original article appeared in record 2007-11660-005.) Previous studies indicate that the encoding of new facial identities in memory is influenced by the type of expression displayed by the faces. In the current study, the authors investigated whether or not this influence requires attention to be explicitly directed toward the affective meaning of facial expressions. In a first experiment, the authors found that facial identity was better recognized when the faces were initially encountered with a happy rather than an angry expression, even when attention was oriented toward facial features other than expression. Using the Remember/Know/Guess paradigm in a second experiment, the authors found that the influence of facial expressions on the conscious recollection of facial identity was even more pronounced when participants' attention was not directed toward expressions. It is suggested that the affective meaning of facial expressions automatically modulates the encoding of facial identity in memory. (PsycINFO Database Record (c) 2010 APA, all rights reserved)

4.
J. B. Halberstadt and P. M. Niedenthal (2001) reported that explanations of target individuals' emotional states biased memory for their facial expressions in the direction of the explanation. The researchers argued for, but did not test, a 2-stage model of the explanation effect, such that verbal explanation increases attention to facial features at the expense of higher level featural configuration, making the faces vulnerable to conceptual reintegration in terms of available emotion categories. The current 4 experiments provided convergent evidence for the "featural shift" hypothesis by examining memory for both faces and facial features following verbal explanation. Featural attention was evidenced by verbalizers' better memory for features relative to control participants and reintegration by a weaker explanation bias for features and configurally altered faces than for whole, unaltered faces. The results have implications for emotion, attribution, language, and the interaction of implicit and explicit processing. (PsycINFO Database Record (c) 2010 APA, all rights reserved)

5.
Presented pictures of parents and strangers to 70 9–24 mo old infants. Results indicate that (a) Ss smiled more often and looked longer at pictures of parents than at those of strangers; (b) the smiling effect was related to age, with older Ss being more likely to smile at familiar than at unfamiliar faces; (c) younger Ss were more likely to smile at the strange woman than at the strange man; and (d) Ss did not differentially respond to pictures of their mother and father. (11 ref) (PsycINFO Database Record (c) 2010 APA, all rights reserved)

6.
[Correction Notice: An erratum for this article was reported in Vol 7(4) of Emotion (see record 2007-17748-022). The image printed for Figure 3 was incorrect. The correct image is provided in the erratum.] Previous studies indicate that the encoding of new facial identities in memory is influenced by the type of expression displayed by the faces. In the current study, the authors investigated whether or not this influence requires attention to be explicitly directed toward the affective meaning of facial expressions. In a first experiment, the authors found that facial identity was better recognized when the faces were initially encountered with a happy rather than an angry expression, even when attention was oriented toward facial features other than expression. Using the Remember/Know/Guess paradigm in a second experiment, the authors found that the influence of facial expressions on the conscious recollection of facial identity was even more pronounced when participants' attention was not directed toward expressions. It is suggested that the affective meaning of facial expressions automatically modulates the encoding of facial identity in memory. (PsycINFO Database Record (c) 2010 APA, all rights reserved)

7.
Studied the accuracy with which 48 Ss (mean ages = 5.5, 7.4, 9.5, and 20.5 yrs) could encode cues commonly found in social interactions (e.g., facial expression, vocal intonation, and movements). Data suggest that younger Ss perceived many everyday social interactions as essentially identical and responded accordingly. (PsycINFO Database Record (c) 2010 APA, all rights reserved)

8.
Perception of faces by budgerigars was studied with computerized images modeled after natural faces. Individual facial characteristics were varied with all others held constant; then relative importance among several features was determined by varying each within a single experiment. Characteristics with the potential to signal important biological information (e.g., age or sex) were perceptually salient, whereas characteristics that vary among faces but have limited potential to signal important information were not. Model faces were also presented in a normal or an altered configuration. Birds discriminated among faces in a normal configuration more easily than among models with an altered configuration even when the facial features on which the discrimination was based differed in the same way; this suggests that configurational cues play an important role in face perception by budgerigars. (PsycINFO Database Record (c) 2010 APA, all rights reserved)

9.
Videotaped the naturally occurring classroom behaviors of 33 preschool children 51–63 mo old. Instances of prosocial, defensive, and social behaviors were coded, as well as peer and teacher reactions to prosocial behaviors. Although teachers responded positively to Ss' prosocial behaviors only a small percentage of the time, peers reacted positively a moderate proportion of the time. Ss who frequently responded to requests for prosocial behavior received fewer positive reactions from peers than Ss who complied with requests less often. In contrast, teachers were more likely to react positively to girls who exhibited high levels of "asked for" (compliant) prosocial behaviors. The type of reactions an S received for prosocial behaviors was related both to the type of reactions given to others' prosocial behaviors and to positive sociability. Frequent performance of spontaneous prosocial actions was related to a different pattern of behaviors than was frequency of prosocial behaviors in response to a request. (21 ref) (PsycINFO Database Record (c) 2010 APA, all rights reserved)

10.
Fundamental to face processing is the ability to encode information about the spatial relations among facial features (configural information). Using a bizarreness rating paradigm, we found that older adults rated configurally distorted faces (eyes and mouth inverted) as less bizarre than young adults did across all tested orientations (0° to 180°) and were more vulnerable to orientation effects when faces were rotated beyond 90°. No age-related differences occurred in the perception of either unaltered faces or featurally distorted faces (eyes whitened, teeth blackened). These findings identify changes in sensitivity to configural information as an important factor in age-related differences in face perception. (PsycINFO Database Record (c) 2010 APA, all rights reserved)

11.
Investigated the efficiency of 2 processing strategies on 2 aspects of facial recognition with 36 undergraduates. Ss viewed each of a series of photographed target faces for 3 sec and either rated its likeability or decided which was its most distinctive physical feature. Each face was presented in its own unique environmental context. Two tests of retention were given: an old–new recognition test of the faces presented without their context, and a test of the Ss' ability to recall the context in which each target face had appeared. Results show that there was no difference between the processing conditions on old–new recognition, but there was a substantial difference in context recall, with the likeability judgment producing much better performance. (French abstract) (17 ref) (PsycINFO Database Record (c) 2010 APA, all rights reserved)

12.
We examined the capacity of the cerebral hemispheres to process faces that deviate from canonical perspective. In Experiment 1, normal Ss performed a gender categorization of faces presented at varying angular orientations in the left visual field (LVF) or right visual field (RVF). Orientation affected processing speed, more so in the RVF than in the LVF. The function relating reaction times to disorientation of the faces was approximately monotonic and reflected the increased difficulty in extracting relevant configurational information as the faces were rotated from canonical perspective. In Experiment 2, 3 commissurotomized Ss performed the same task. They responded above chance in the 2 visual fields, and the pattern of their results was similar to that obtained with the normal Ss, but the effect of disorientation was considerably more pronounced. It is suggested that the right hemisphere contribution becomes more critical the further the visual pattern departs from conventional view. Issues regarding the specification of processes correcting for disorientation and comparison of normal and commissurotomized Ss are discussed. (PsycINFO Database Record (c) 2010 APA, all rights reserved)
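The abstract above describes reaction times (RTs) that rise roughly monotonically with face rotation, more steeply in the right visual field. Purely as an illustration of how such a disorientation function might be tabulated, here is a minimal Python sketch; the trial counts, RT values, and slopes are invented placeholders, not data from the study.

```python
# Hypothetical sketch: mean gender-categorization RT as a function of
# face rotation, separately for left (LVF) and right (RVF) visual fields.
# All numbers below are fabricated for illustration only.
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
orientations = [0, 45, 90, 135, 180]                  # degrees from upright
trials = pd.DataFrame({
    "field": rng.choice(["LVF", "RVF"], 2000),
    "orientation": rng.choice(orientations, 2000),
})
# Make RT increase with rotation, more steeply for RVF, plus noise.
slope = np.where(trials["field"] == "RVF", 1.5, 1.0)  # ms per degree (invented)
trials["rt_ms"] = 550 + slope * trials["orientation"] + rng.normal(0, 40, len(trials))

# The "disorientation function": mean RT per visual field and orientation.
print(trials.groupby(["field", "orientation"])["rt_ms"].mean().unstack())
```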

13.
14.
Assessed the development in approximately 160 6–16 yr old Ss of the ability to encode unfamiliar faces. Performance improved markedly between ages 6 and 10 and then remained at a fixed level or actually declined for several years, finally improving again by age 16. Evidence is provided that this distinctive developmental course reflects, in part, acquisition of processes specific to the encoding of faces rather than general pattern encoding or metamemorial skills. The possibility that maturational factors contribute to the developmental course of face recognition is raised, and 2 sources of data relevant to assessing this possibility are discussed. (41 ref) (PsycINFO Database Record (c) 2010 APA, all rights reserved)

15.
Principal-component analyses of 4 face-recognition studies uncovered 2 independent components. The first component was strongly related to false-alarm errors with new faces as well as to facial “conjunctions” that recombine features of previously studied faces. The second component was strongly related to hits as well as to the conjunction/new difference in false-alarm errors. The pattern of loadings on both components was impressively invariant across the experiments, which differed in age range of participants, stimulus set, list length, facial orientation, and the presence versus absence of familiarized lures along with conjunction and entirely new lures in the recognition test. Taken together, the findings show that neither component was exclusively related to discrimination, criterion, configural processing, featural processing, context recollection, or familiarity. Rather, the data are consistent with a neuropsychological model that distinguishes frontal and occipitotemporal contributions to face recognition memory. Within the framework of the model, findings showed that frontal and occipitotemporal contributions are discernible from the pattern of individual differences in behavioral performance among healthy young adults. (PsycINFO Database Record (c) 2010 APA, all rights reserved)
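To make the analysis named above concrete, the following is a minimal Python sketch of a principal-component analysis over per-participant recognition measures (hits, false alarms to new faces, false alarms to conjunction lures). The measure set, sample size, and data are invented placeholders; this is not a reconstruction of the reported studies.

```python
# Hypothetical sketch of a principal-component analysis of
# face-recognition memory measures. Data are random placeholders.
import numpy as np

rng = np.random.default_rng(0)
# rows = participants, columns = [hits, FA_new, FA_conjunction]
scores = rng.normal(size=(40, 3))

# Standardize the measures and take the correlation matrix.
z = (scores - scores.mean(axis=0)) / scores.std(axis=0, ddof=1)
corr = np.corrcoef(z, rowvar=False)

# Eigendecomposition of the correlation matrix, sorted by variance explained.
eigvals, eigvecs = np.linalg.eigh(corr)
order = np.argsort(eigvals)[::-1]
eigvals, eigvecs = eigvals[order], eigvecs[:, order]

loadings = eigvecs * np.sqrt(eigvals)   # loading of each measure on each component
print("proportion of variance explained:", eigvals / eigvals.sum())
print("loadings (rows = measures, columns = components):\n", loadings)
```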

16.
The current study investigated the influence of a low-level local feature (curvature) and a high-level emergent feature (facial expression) on rapid search. These features distinguished the target from the distractors and were presented either alone or together. Stimuli were triplets of up and down arcs organized to form meaningless patterns or schematic faces. In the feature search, the target had the only down arc in the display. In the conjunction search, the target was a unique combination of up and down arcs. When triplets depicted faces, the target was also the only smiling face among frowning faces. The face-level feature facilitated the conjunction search but, surprisingly, slowed the feature search. These results demonstrated that an object inferiority effect could occur even when the emergent feature was useful in the search. Rapid search processes appear to operate only on high-level representations even when low-level features would be more efficient. (PsycINFO Database Record (c) 2010 APA, all rights reserved)

17.
Assessed the effects of uniform and religious status of interviewers on male and female Catholic and non-Catholic interviewees. 128 16–21 yr old Ss had interviews with a nun dressed either in lay clothing or a habit or with a non-nun dressed either in lay clothing or a habit. The design was a 4 × 2 × 2 factorial. The main effects were the 4 interviewer combinations (Religious Status × Dress), sex of interviewee, and religious preference of interviewee. The dependent variables were length of interview and scores on an attitude measure, an experience scale, and an interviewer rating scale. ANOVAs revealed significant main effect differences in (a) length of interview (interviewees spent more time speaking to nuns dressed in a habit) and (b) interviewee attitude (female interviewees responded more conservatively than males, Catholics responded more conservatively than non-Catholics, and all Ss responded more conservatively to nuns than non-nuns). (4 ref) (PsycINFO Database Record (c) 2010 APA, all rights reserved)
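As an illustration of the kind of between-subjects factorial ANOVA described above (a 4 × 2 × 2 design with 128 interviewees), here is a minimal Python sketch using the statsmodels formula API. The factor labels, cell sizes, and dependent variable are invented placeholders, not the study's data.

```python
# Hypothetical sketch of a 4 x 2 x 2 between-subjects factorial ANOVA.
# All data below are fabricated for illustration only.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.anova import anova_lm

rng = np.random.default_rng(0)
levels_interviewer = ["nun_habit", "nun_lay", "nonnun_habit", "nonnun_lay"]
levels_sex = ["female", "male"]
levels_religion = ["catholic", "noncatholic"]

# Balanced design: 4 x 2 x 2 = 16 cells, 8 interviewees per cell = 128 Ss.
cells = [(i, s, r) for i in levels_interviewer
         for s in levels_sex for r in levels_religion]
df = pd.DataFrame(cells * 8, columns=["interviewer", "sex", "religion"])
df["attitude"] = rng.normal(size=len(df))   # placeholder dependent variable

# Fit the full factorial model and report Type II ANOVA results.
model = smf.ols("attitude ~ C(interviewer) * C(sex) * C(religion)", data=df).fit()
print(anova_lm(model, typ=2))
```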

18.
In an experiment with 20 undergraduates, video recordings of actors' faces covered with black makeup and white spots were played back to the Ss so that only the white spots were visible. The results demonstrate that moving displays of happiness, sadness, fear, surprise, anger, and disgust were recognized more accurately than static displays of the white spots at the apex of the expressions. This indicates that facial motion, in the absence of information about the shape and position of facial features, is informative about these basic emotions. Normally illuminated dynamic displays of these expressions, however, were recognized more accurately than displays of moving spots. The relative effectiveness of upper and lower facial areas for the recognition of the 6 emotions was also investigated using normally illuminated and spots-only displays. In both instances, the results indicate that different facial regions are more informative for different emotions. (20 ref) (PsycINFO Database Record (c) 2010 APA, all rights reserved)

19.
In previous research, the emotions associated with repressors' memorial representations were found to be more discrete than those associated with nonrepressors'. In each of the 3 experiments reported here, repressive discreteness was apparent in repressors' appraisals of emotional stimuli at the time they were encoded. In 1 experiment, Ss appraised individual facial expressions of emotion. Repressors judged the dominant emotions in these faces as no less intense than did nonrepressors, but they appraised the blend of nondominant emotions as less intense than did nonrepressors. In the remaining 2 experiments, Ss appraised crowds of emotional faces as well as crowds of geometric shapes. In both crowd experiments, the repressive discreteness was evident in appraisals of crowds of emotional faces but not in appraisals of crowds of geometric shapes. The repressive discreteness effect did not appear to reflect a general repressor–nonrepressor difference in the appraisal of stimulus features. Rather, the results suggested that repressive discreteness may be constrained to appraisals of emotions. (PsycINFO Database Record (c) 2010 APA, all rights reserved)

20.
Empirical evidence shows an effect of gaze direction on cueing spatial attention, regardless of the emotional expression shown by a face, whereas a combined effect of gaze direction and facial expression has been observed on individuals' evaluative judgments. In 2 experiments, the authors investigated whether gaze direction and facial expression affect spatial attention depending upon the presence of an evaluative goal. Disgusted, fearful, happy, or neutral faces gazing left or right were followed by positive or negative target words presented either at the spatial location looked at by the face or at the opposite spatial location. Participants responded to target words based on affective valence (i.e., positive/negative) in Experiment 1 and on letter case (lowercase/uppercase) in Experiment 2. Results showed that participants responded much faster to targets presented at the spatial location looked at by disgusted or fearful faces, but only in Experiment 1, when an evaluative task was used. The present findings clearly show that negative facial expressions enhance the attentional shifts due to eye-gaze direction, provided an explicit evaluative goal is present. (PsycINFO Database Record (c) 2010 APA, all rights reserved)
