Similar Documents
20 similar documents found (search time: 22 ms)
1.
Confronted with the loss of one type of sensory input, we compensate using information conveyed by other senses. However, losing one type of sensory information at specific developmental times may lead to deficits across all sensory modalities. We addressed the effect of auditory deprivation on the development of tactile abilities, taking into account changes occurring at the behavioral and cortical level. Congenitally deaf and hearing individuals performed two tactile tasks, the first requiring the discrimination of the temporal duration of touches and the second requiring the discrimination of their spatial length. Compared with hearing individuals, deaf individuals were impaired only in tactile temporal processing. To explore the neural substrate of this difference, we ran a TMS experiment. In deaf individuals, the auditory association cortex was involved in temporal and spatial tactile processing, with the same chronometry as the primary somatosensory cortex. In hearing participants, the involvement of auditory association cortex occurred at a later stage and selectively for temporal discrimination. The different chronometry in the recruitment of the auditory cortex in deaf individuals correlated with the tactile temporal impairment. Thus, early hearing experience seems to be crucial to develop an efficient temporal processing across modalities, suggesting that plasticity does not necessarily result in behavioral compensation.  相似文献   

2.
Listeners are able to extract important linguistic information by viewing the talker's face, a process known as 'speechreading.' Previous studies of speechreading present small closed sets of simple words and their results indicate that visual speech processing engages a wide network of brain regions in the temporal, frontal, and parietal lobes that are likely to underlie multiple stages of the receptive language system. The present study further explored this network in a large group of subjects by presenting naturally spoken sentences which tap the richer complexities of visual speech processing. Four different baselines (blank screen, static face, nonlinguistic facial gurning, and auditory speech) enabled us to determine the hierarchy of neural processing involved in speechreading and to test the claim that visual input reliably accesses sound-based representations in the auditory cortex. In contrast to passively viewing a blank screen, the static-face condition evoked activation bilaterally across the border of the fusiform gyrus and cerebellum, and in the medial superior frontal gyrus and left precentral gyrus (p < .05, whole brain corrected). With the static face as baseline, the gurning face evoked bilateral activation in the motion-sensitive region of the occipital cortex, whereas visual speech additionally engaged the middle temporal gyrus, inferior and middle frontal gyri, and the inferior parietal lobe, particularly in the left hemisphere. These latter regions are implicated in lexical stages of spoken language processing. Although auditory speech generated extensive bilateral activation across both superior and middle temporal gyri, the group-averaged pattern of speechreading activation failed to include any auditory regions along the superior temporal gyrus, suggesting that fluent visual speech does not always involve sound-based coding of the visual input. An important finding from the individual subject analyses was that activation in the superior temporal gyrus did reach significance (p < .001, small-volume corrected) for a subset of the group. Moreover, the extent of the left-sided superior temporal gyrus activity was strongly correlated with speechreading performance. Skilled speechreading was also associated with activations and deactivations in other brain regions, suggesting that individual differences reflect the efficiency of a circuit linking sensory, perceptual, memory, cognitive, and linguistic processes rather than the operation of a single component process.
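The individual-differences result reported here (the extent of left superior temporal gyrus activity tracking speechreading performance) boils down to an across-subject correlation. A minimal sketch of that type of analysis, with hypothetical per-subject numbers rather than the study's data:

```python
import numpy as np
from scipy import stats

# Hypothetical per-subject values (not the study's data): number of
# supra-threshold voxels in left STG and speechreading accuracy (%).
stg_extent = np.array([12, 85, 40, 130, 5, 60, 95, 22, 150, 70])
speechreading_score = np.array([35, 72, 55, 80, 30, 60, 75, 40, 88, 66])

# Across-subject Pearson correlation, as in an individual-differences analysis.
r, p = stats.pearsonr(stg_extent, speechreading_score)
print(f"r = {r:.2f}, p = {p:.3f}")
```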

3.
Psychophysical and neuroimaging studies in both animal and human subjects have clearly demonstrated that cortical plasticity following sensory deprivation leads to a brain functional reorganization that favors the spared modalities. In postlingually deaf patients, the use of a cochlear implant (CI) allows a recovery of the auditory function, which will probably counteract the cortical crossmodal reorganization induced by hearing loss. To study the dynamics of such reversed crossmodal plasticity, we designed a longitudinal neuroimaging study involving the follow-up of 10 postlingually deaf adult CI users engaged in a visual speechreading task. While speechreading activates Broca's area in normally hearing subjects (NHS), the activity level elicited in this region in CI patients is abnormally low and increases progressively with post-implantation time. Furthermore, speechreading in CI patients induces abnormal crossmodal activations in right anterior regions of the superior temporal cortex normally devoted to processing human voice stimuli (temporal voice-sensitive areas-TVA). These abnormal activity levels diminish with post-implantation time and tend towards the levels observed in NHS. First, our study revealed that the neuroplasticity after cochlear implantation involves not only auditory but also visual and audiovisual speech processing networks. Second, our results suggest that during deafness, the functional links between cortical regions specialized in face and voice processing are reallocated to support speech-related visual processing through cross-modal reorganization. Such reorganization allows a more efficient audiovisual integration of speech after cochlear implantation. These compensatory sensory strategies are later completed by the progressive restoration of the visuo-audio-motor speech processing loop, including Broca's area.  相似文献   

4.
N. E. Crone, L. Hao, J. Hart, D. Boatman, R. P. Lesser, R. Irizarry, B. Gordon. Neurology, 2001, 57(11): 2045–2053.
OBJECTIVE: To investigate the functional-neuroanatomic substrates of word production using signed versus spoken language. METHODS: The authors studied single-word processing with varying input and output modalities in a 38-year-old woman with normal hearing and speech who had become proficient in sign language 8 years before developing intractable epilepsy. Subdural electrocorticography (ECoG) was performed during picture naming and word reading (visual inputs) and word repetition (auditory inputs); these tasks were repeated with speech and with sign language responses. Cortical activation was indexed by event-related power augmentation in the 80- to 100-Hz gamma band, and was compared with general principles of functional anatomy and with subject-specific maps of the same or similar tasks using electrical cortical stimulation (ECS). RESULTS: Speech outputs activated tongue regions of the sensorimotor cortex, and sign outputs activated hand regions. In addition, signed word production activated parietal regions that were not activated by spoken word production. Posterior superior temporal gyrus was activated earliest and to the greatest extent during auditory word repetition, and the basal temporal-occipital cortex was activated similarly during naming and reading, reflecting the different modalities of input processing. With few exceptions, topographic patterns of ECoG gamma were consistent with ECS maps of the same or similar language tasks. CONCLUSIONS: Spoken and signed word production activated many of the same cortical regions, particularly those processing auditory and visual inputs; however, they activated different regions of sensorimotor cortex, and signing activated parietal cortex more than did speech. This study illustrates the utility of electrocorticographic gamma for studying the neuroanatomy and processing dynamics of human language.  相似文献   
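Event-related power augmentation in a gamma band, as used here to index cortical activation, is commonly estimated by band-pass filtering the recorded signal and comparing the post-stimulus analytic amplitude with a pre-stimulus baseline. A rough single-channel sketch on simulated data; the filter order, baseline window, and percent-change measure are illustrative assumptions, not the paper's exact ECoG pipeline:

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

fs = 1000                                   # sampling rate (Hz), simulated
t = np.arange(-0.5, 1.0, 1 / fs)            # time relative to stimulus onset (s)
ecog = np.random.randn(t.size)              # stand-in for one ECoG channel, one trial

# Band-pass 80-100 Hz and take the analytic amplitude (envelope) as gamma power.
b, a = butter(4, [80, 100], btype="bandpass", fs=fs)
envelope = np.abs(hilbert(filtfilt(b, a, ecog)))

# Express post-stimulus power as percent change from the pre-stimulus baseline.
baseline = envelope[t < 0].mean()
gamma_change = 100 * (envelope - baseline) / baseline
print(gamma_change[(t > 0.1) & (t < 0.4)].mean())   # mean augmentation, 100-400 ms
```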

5.
Speech is perceived both by ear and by eye. Unlike heard speech, some seen speech gestures can be captured in stilled image sequences. Previous studies have shown that in hearing people, natural time-varying silent seen speech can access the auditory cortex (left superior temporal regions). Using functional magnetic resonance imaging (fMRI), the present study explored the extent to which this circuitry was activated when seen speech was deprived of its time-varying characteristics. In the scanner, hearing participants were instructed to look for a prespecified visible speech target sequence ("voo" or "ahv") among other monosyllables. In one condition, the image sequence comprised a series of stilled key frames showing apical gestures (e.g., separate frames for "v" and "oo" [from the target] or "ee" and "m" [i.e., from nontarget syllables]). In the other condition, natural speech movement of the same overall segment duration was seen. In contrast to a baseline condition in which the letter "V" was superimposed on a resting face, stilled speech face images generated activation in posterior cortical regions associated with the perception of biological movement, despite the lack of apparent movement in the speech image sequence. Activation was also detected in traditional speech-processing regions including the left inferior frontal (Broca's) area, left superior temporal sulcus (STS), and left supramarginal gyrus (the dorsal aspect of Wernicke's area). Stilled speech sequences also generated activation in the ventral premotor cortex and anterior inferior parietal sulcus bilaterally. Moving faces generated significantly greater cortical activation than stilled face sequences, and in similar regions. However, a number of differences between stilled and moving speech were also observed. In the visual cortex, stilled faces generated relatively more activation in primary visual regions (V1/V2), while visual movement areas (V5/MT+) were activated to a greater extent by moving faces. Cortical regions activated more by naturally moving speaking faces included the auditory cortex (Brodmann's Areas 41/42; lateral parts of Heschl's gyrus) and the left STS and inferior frontal gyrus. Seen speech with normal time-varying characteristics appears to have preferential access to "purely" auditory processing regions specialized for language, possibly via acquired dynamic audiovisual integration mechanisms in STS. When seen speech lacks natural time-varying characteristics, access to speech-processing systems in the left temporal lobe may be achieved predominantly via action-based speech representations, realized in the ventral premotor cortex.  相似文献   

6.
Hearing loss is one of the most common complaints in adults over the age of 60 and a major contributor to difficulties in speech comprehension. To examine the effects of hearing ability on the neural processes supporting spoken language processing in humans, we used functional magnetic resonance imaging to monitor brain activity while older adults with age-normal hearing listened to sentences that varied in their linguistic demands. Individual differences in hearing ability predicted the degree of language-driven neural recruitment during auditory sentence comprehension in bilateral superior temporal gyri (including primary auditory cortex), thalamus, and brainstem. In a second experiment, we examined the relationship of hearing ability to cortical structural integrity using voxel-based morphometry, demonstrating a significant linear relationship between hearing ability and gray matter volume in primary auditory cortex. Together, these results suggest that even moderate declines in peripheral auditory acuity lead to a systematic downregulation of neural activity during the processing of higher-level aspects of speech, and may also contribute to loss of gray matter volume in primary auditory cortex. More generally, these findings support a resource-allocation framework in which individual differences in sensory ability help define the degree to which brain regions are recruited in service of a particular task.  相似文献   
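The structural result here is, in essence, a per-voxel linear regression of gray matter volume on hearing ability. A toy version of that relationship at a single voxel, using made-up pure-tone-average and volume values (a real VBM analysis also covaries out age and total intracranial volume and corrects across voxels):

```python
import numpy as np
from scipy import stats

# Hypothetical values: pure-tone average (dB HL; higher = worse hearing) and
# gray matter volume estimates at one auditory-cortex voxel, one value per subject.
pta_db = np.array([8, 12, 15, 20, 24, 27, 30, 34, 38, 41])
gm_volume = np.array([0.62, 0.60, 0.61, 0.57, 0.55, 0.56, 0.52, 0.50, 0.49, 0.47])

# Linear relationship between hearing ability and gray matter volume at this voxel.
slope, intercept, r, p, se = stats.linregress(pta_db, gm_volume)
print(f"slope = {slope:.4f} volume units per dB, r = {r:.2f}, p = {p:.4f}")
```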

7.
Speechreading circuits in people born deaf
In hearing people, silent speechreading generates bilateral activation in superior temporal regions specialised for the perception of auditory speech [Science 276 (1997) 593; Neuroreport 11 (2000) 1729; Proceedings of the Royal Society London B 268 (2001) 451]. In the present study, fMRI data were collected from deaf and hearing volunteers while they speechread numbers and during a control task in which they counted nonsense mouth movements (gurns). Brain activation for silent speechreading in oral deaf participants was found primarily in posterior cingulate cortex and hippocampal/lingual gyri. In contrast to the pattern observed in the hearing group, deaf participants showed no speechreading-specific activation in left lateral temporal regions. These data suggest that acoustic experience shapes the functional circuits for analysing speech. We speculate on the functional role that the posterior cingulate gyrus may play in speechreading by profoundly congenitally deaf people.

8.
Cross-modal plasticity in deaf subjects remains a matter of debate. We examined whether this plasticity depends on the extent of hearing loss. Three groups of twelve volunteers each were investigated: one with normal hearing, one with total hearing loss, and one with only minimal residual hearing. All participants except those in the normal-hearing group were proficient in German Sign Language (GSL). The groups were studied with functional MRI in a standard block design while they watched sign-language videos alternating with a black frame. During the sign-language condition, deaf subjects showed significant activation of the auditory cortex in both hemispheres, comprising Brodmann areas (BA) 42 and 22, which correspond to the secondary auditory association areas. Activation of the angular and supramarginal gyri was also seen. Primary auditory cortex was activated during the sign-language task only in subjects with total hearing loss, not in those with residual hearing. In conclusion, our results indicate that cortical reorganization of the auditory cortex extending to primary auditory fields is present only in subjects with total hearing loss.
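A block-design contrast like the one described (sign-language videos versus a black frame) is typically analyzed by building a boxcar regressor, convolving it with a hemodynamic response function, and fitting it to each voxel's time series. A compact sketch under simple assumptions (a gamma-function HRF, 20 s blocks, ordinary least squares); illustrative only, not the study's actual processing chain:

```python
import numpy as np
from scipy.stats import gamma

TR, n_vols, block = 2.0, 120, 10            # 10-volume (20 s) alternating blocks
design = np.tile(np.r_[np.ones(block), np.zeros(block)], n_vols // (2 * block))

# Simple gamma-function HRF (peaks around 5 s); a stand-in for a canonical HRF.
t = np.arange(0, 30, TR)
hrf = gamma.pdf(t, a=6, scale=1.0)
regressor = np.convolve(design, hrf)[:n_vols]

# Fit one voxel's time series: beta > 0 means more signal during sign-language blocks.
X = np.column_stack([regressor, np.ones(n_vols)])
y = np.random.randn(n_vols)                 # stand-in voxel time series
beta, _, _, _ = np.linalg.lstsq(X, y, rcond=None)
print(f"sign-language > black-frame beta = {beta[0]:.3f}")
```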

9.
This fMRI study explored the functional neural organisation of seen speech in congenitally deaf native signers and hearing non-signers. Both groups showed extensive activation in perisylvian regions for speechreading words compared to viewing the model at rest. In contrast to earlier findings, activation in left middle and posterior portions of superior temporal cortex, including regions within the lateral sulcus and the superior and middle temporal gyri, was greater for deaf than hearing participants. This activation pattern survived covarying for speechreading skill, which was better in deaf than hearing participants. Furthermore, correlational analysis showed that regions of activation related to speechreading skill varied with the hearing status of the observers. Deaf participants showed a positive correlation between speechreading skill and activation in the middle/posterior superior temporal cortex. In hearing participants, however, more posterior and inferior temporal activation (including fusiform and lingual gyri) was positively correlated with speechreading skill. Together, these findings indicate that activation in the left superior temporal regions for silent speechreading can be modulated by both hearing status and speechreading skill.  相似文献   

10.
In the past, researchers investigated silent lipreading in normal hearing subjects with functional neuroimaging tools and showed how the brain processes visual stimuli that are normally accompanied by an auditory counterpart. Previously, we showed activation differences between males and females in primary auditory cortex during silent lipreading, i.e. only the female group significantly activated the primary auditory region during lipreading. Here we report and discuss the overall activation pattern in males and females. We used positron emission tomography to study silent lipreading in 19 normal hearing subjects (nine females). Prior to scanning, subjects were tested on their lipreading ability and only good lipreaders were included in the study. Silent lipreading was compared with a static image. In the whole group, activations were found mainly in the left hemisphere with major clusters in superior temporal, inferior parietal, inferior frontal and precentral regions. The female group showed more clusters and these clusters were larger than in the male group. Sex differences were found mainly in right inferior frontal and left inferior parietal regions and to a lesser extent in bilateral angular and precentral gyri. The sex differences in the parietal multimodal region support our previous hypothesis that the male and female brain process visual speech stimuli differently without differences in overt lipreading ability. Specifically, females associate the visual speech image with the corresponding auditory speech sound whereas males focus more on the visual image itself.  相似文献   

11.
Visual stimuli activate auditory cortex in deaf subjects: evidence from MEG
Studies using fMRI have demonstrated that visual stimuli activate auditory cortex in deaf subjects. Given the low temporal resolution of fMRI, it is uncertain whether this activation is associated with initial stimulus processing. Here, we used MEG in deaf and hearing subjects to evaluate whether auditory cortex, devoid of its normal input, comes to serve the visual modality early in the course of stimulus processing. In line with previous findings, visual activity was observed in the auditory cortex of deaf, but not hearing, subjects. This activity occurred within 100-400 ms of stimulus presentation and was primarily over the right hemisphere. These results add to the mounting evidence that removal of one sensory modality in humans leads to neural reorganization of the remaining modalities.  相似文献   

12.
Research on how lexical tone is neuroanatomically represented in the human brain is central to our understanding of cortical regions subserving language. Past studies have focused exclusively on tone perception in spoken language, and little is known about lexical tone processing during visual word reading and its associated brain mechanisms. In this study, we performed two experiments to identify the neural substrates of Chinese tone reading. First, we used a tone judgment paradigm to investigate tone processing of visually presented Chinese characters. We found that, relative to baseline, tone perception of printed Chinese characters was mediated by strong brain activation in bilateral frontal regions, the left inferior parietal lobule, the left posterior middle/medial temporal gyrus, the left inferior temporal region, bilateral visual systems, and the cerebellum. Surprisingly, no activation was found in superior temporal regions, brain sites well known for speech tone processing. In an activation likelihood estimation (ALE) meta-analysis combining results of relevant published studies, we attempted to elucidate whether the left temporal cortex activity identified in Experiment 1 is consistent with that found in previous studies of auditory lexical tone perception. The ALE results showed that only the left superior temporal gyrus and putamen were critical in auditory lexical tone processing. These findings suggest that activation in the superior temporal cortex associated with lexical tone perception is modality-dependent. Hum Brain Mapp, 36:304–312, 2015. © 2014 Wiley Periodicals, Inc.
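Activation likelihood estimation models each reported peak coordinate as a Gaussian spatial probability distribution and combines the per-experiment maps into a voxel-wise likelihood that at least one experiment activated that location. A heavily simplified one-dimensional illustration of the idea, with hypothetical peaks and a fixed smoothing width (full ALE additionally scales the Gaussian width by sample size and thresholds the map by permutation):

```python
import numpy as np

x = np.arange(-60, 61)                      # a 1-D "axis" of voxel coordinates (mm)
sigma = 5.0                                 # assumed spatial uncertainty of each peak

def modeled_activation(peaks):
    """Per-experiment map: maximum of Gaussian blobs centred on its reported peaks."""
    blobs = [np.exp(-(x - p) ** 2 / (2 * sigma ** 2)) for p in peaks]
    return np.max(blobs, axis=0)

# Hypothetical peak coordinates from three experiments on auditory tone perception.
experiments = [[-52, -48], [-55], [-50, 30]]
ma_maps = np.array([modeled_activation(p) for p in experiments])

# ALE value: probability that at least one experiment activates each location.
ale = 1 - np.prod(1 - ma_maps, axis=0)
print(f"peak ALE value at x = {x[np.argmax(ale)]} mm")
```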

13.
Spoken languages use one set of articulators -- the vocal tract, whereas signed languages use multiple articulators, including both manual and facial actions. How sensitive are the cortical circuits for language processing to the particular articulators that are observed? This question can only be addressed with participants who use both speech and a signed language. In this study, we used functional magnetic resonance imaging to compare the processing of speechreading and sign processing in deaf native signers of British Sign Language (BSL) who were also proficient speechreaders. The following questions were addressed: To what extent do these different language types rely on a common brain network? To what extent do the patterns of activation differ? How are these networks affected by the articulators that languages use? Common peri-sylvian regions were activated both for speechreading English words and for BSL signs. Distinctive activation was also observed reflecting the language form. Speechreading elicited greater activation in the left mid-superior temporal cortex than BSL, whereas BSL processing generated greater activation at the temporo-parieto-occipital junction in both hemispheres. We probed this distinction further within BSL, where manual signs can be accompanied by different types of mouth action. BSL signs with speech-like mouth actions showed greater superior temporal activation, whereas signs made with non-speech-like mouth actions showed more activation in posterior and inferior temporal regions. Distinct regions within the temporal cortex are not only differentially sensitive to perception of the distinctive articulators for speech and for sign but also show sensitivity to the different articulators within the (signed) language.  相似文献   

14.
It has been proposed that the auditory cortex in deaf humans might undergo task-specific reorganization. However, evidence remains scarce, as previous experiments used only two very specific tasks (temporal processing and face perception) in the visual modality. Here, congenitally deaf/hard of hearing and hearing women and men were enrolled in an fMRI experiment as we sought to fill this evidence gap in two ways. First, we compared activation evoked by a temporal processing task performed in two different modalities, visual and tactile. Second, we contrasted this task with a perceptually similar task that focuses on the spatial dimension. Additional control conditions consisted of passive stimulus observation. In line with the task specificity hypothesis, the auditory cortex in the deaf was activated by temporal processing in both visual and tactile modalities. This effect was selective for temporal processing relative to spatial discrimination. However, spatial processing also led to significant auditory cortex recruitment, which, unlike temporal processing, occurred even during passive stimulus observation. We conclude that auditory cortex recruitment in the deaf and hard of hearing might involve interplay between task-selective and pluripotential mechanisms of cross-modal reorganization. Our results open several avenues for the investigation of the full complexity of the cross-modal plasticity phenomenon. SIGNIFICANCE STATEMENT: Previous studies suggested that the auditory cortex in the deaf may change input modality (sound to vision) while keeping its function (e.g., rhythm processing). We investigated this hypothesis by asking deaf or hard of hearing and hearing adults to discriminate between temporally and spatially complex sequences in visual and tactile modalities. The results show that such function-specific brain reorganization, as has previously been demonstrated in the visual modality, also occurs for tactile processing. On the other hand, they also show that for some stimuli (spatial) the auditory cortex activates automatically, which is suggestive of a take-over by a different kind of cognitive function. The observed differences in processing of sequences might thus result from an interplay of task-specific and pluripotent plasticity.

15.
A. L. Giraud, E. Truy. Neuropsychologia, 2002, 40(9): 1562–1569.
Early visual cortex can be recruited by meaningful sounds in the absence of visual information. This occurs in particular in cochlear implant (CI) patients whose dependency on visual cues in speech comprehension is increased. Such cross-modal interaction mirrors the response of early auditory cortex to mouth movements (speech reading) and may reflect the natural expectancy of the visual counterpart of sounds, lip movements. Here we pursue the hypothesis that visual activations occur specifically in response to meaningful sounds. We performed PET in both CI patients and controls, while subjects listened either to their native language or to a completely unknown language. A recruitment of early visual cortex, the left posterior inferior temporal gyrus (ITG) and the left superior parietal cortex was observed in both groups. While no further activation occurred in the group of normal-hearing subjects, CI patients additionally recruited the right perirhinal/fusiform and mid-fusiform, the right temporo-occipito-parietal (TOP) junction and the left inferior prefrontal cortex (LIPF, Broca's area). This study confirms a participation of visual cortical areas in semantic processing of speech sounds. Observation of early visual activation in normal-hearing subjects shows that auditory-to-visual cross-modal effects can also be recruited under natural hearing conditions. In cochlear implant patients, speech activates the mid-fusiform gyrus in the vicinity of the so-called face area. This suggests that specific cross-modal interaction involving advanced stages in the visual processing hierarchy develops after cochlear implantation and may be the correlate of increased usage of lip-reading.  相似文献   

16.
The quantity and quality of the language input that infants receive from their caregivers affects their future language abilities; however, it is unclear how variation in this input relates to preverbal brain circuitry. The current study investigated the relation between naturalistic language input and the functional connectivity (FC) of language networks in human infancy using resting-state functional magnetic resonance imaging (rsfMRI). We recorded the naturalistic language environments of five- to eight-month-old male and female infants using the Linguistic ENvironment Analysis (LENA) system and measured the quantity and consistency of their exposure to adult words (AWs) and adult–infant conversational turns (CTs). Infants completed an rsfMRI scan during natural sleep, and we examined FC among regions of interest (ROIs) previously implicated in language comprehension, including the auditory cortex, the left inferior frontal gyrus (IFG), and the bilateral superior temporal gyrus (STG). Consistent with theory of the ontogeny of the cortical language network (Skeide and Friederici, 2016), we identified two subnetworks posited to have distinct developmental trajectories: a posterior temporal network involving connections of the auditory cortex and bilateral STG and a frontotemporal network involving connections of the left IFG. Independent of socioeconomic status (SES), the quantity of CTs was uniquely associated with FC of these networks. Infants who engaged in a larger number of CTs in daily life had lower connectivity in the posterior temporal language network. These results provide evidence for the role of vocal interactions with caregivers, compared with overheard adult speech, in the function of language networks in infancy.  相似文献   
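ROI-to-ROI functional connectivity in an analysis like this is essentially the (Fisher z-transformed) correlation between the ROIs' resting-state time courses, related across infants to the LENA-derived conversational-turn counts. A bare-bones sketch with simulated time courses and placeholder counts; the ROI labels and numbers are assumptions, not the study's data:

```python
import numpy as np
from scipy import stats

n_timepoints, n_infants = 200, 20
rng = np.random.default_rng(0)

def roi_fc(ts_a, ts_b):
    """Functional connectivity as the Fisher z-transformed Pearson correlation."""
    return np.arctanh(stats.pearsonr(ts_a, ts_b)[0])

# Simulated auditory-cortex and STG time courses for each infant (placeholders).
fc_posterior = np.array([
    roi_fc(rng.standard_normal(n_timepoints), rng.standard_normal(n_timepoints))
    for _ in range(n_infants)
])

# Relate posterior-network FC to conversational-turn counts across infants.
conversational_turns = rng.integers(50, 600, size=n_infants)
r, p = stats.pearsonr(conversational_turns, fc_posterior)
print(f"CT count vs. posterior-network FC: r = {r:.2f}, p = {p:.3f}")
```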

17.
To investigate neural plasticity resulting from early auditory deprivation and use of American Sign Language, we measured responses to visual stimuli in deaf signers, hearing signers, and hearing nonsigners using functional magnetic resonance imaging. We examined "compensatory hypertrophy" (changes in the responsivity/size of visual cortical areas) and "cross-modal plasticity" (changes in auditory cortex responses to visual stimuli). We measured the volume of early visual areas (V1, V2, V3, V4, and MT+). We also measured the amplitude of responses within these areas, and within the auditory cortex, to a peripheral visual motion stimulus that was attended or ignored. We found no major differences between deaf and hearing subjects in the size or responsivity of early visual areas. In contrast, within the auditory cortex, motion stimuli evoked significant responses in deaf subjects, but not in hearing subjects, in a region of the right auditory cortex corresponding to Brodmann's areas 41, 42, and 22. This hemispheric selectivity may be due to a predisposition for the right auditory cortex to process motion; earlier studies report a right hemisphere bias for auditory motion in hearing subjects. Visual responses within the auditory cortex of deaf subjects were stronger for attended than ignored stimuli, suggesting top-down processes. Hearing signers did not show visual responses in the auditory cortex, indicating that cross-modal plasticity can be attributed to auditory deprivation rather than sign language experience. The largest effects of auditory deprivation occurred within the auditory cortex rather than the visual cortex, suggesting that the absence of normal input is necessary for large-scale cortical reorganization to occur.  相似文献   

18.
Modulation of vocal pitch is a key speech feature that conveys important linguistic and affective information. Auditory feedback is used to monitor and maintain pitch. We examined induced neural high gamma power (HGP) (65–150 Hz) using magnetoencephalography during pitch feedback control. Participants phonated into a microphone while hearing their auditory feedback through headphones. During each phonation, a single real‐time 400 ms pitch shift was applied to the auditory feedback. Participants compensated by rapidly changing their pitch to oppose the pitch shifts. This behavioral change required coordination of the neural speech motor control network, including integration of auditory and somatosensory feedback to initiate change in motor plans. We found increases in HGP across both hemispheres within 200 ms of pitch shifts, covering left sensory and right premotor, parietal, temporal, and frontal regions, involved in sensory detection and processing of the pitch shift. Later responses to pitch shifts (200–300 ms) were right dominant, in parietal, frontal, and temporal regions. Timing of activity in these regions indicates their role in coordinating motor change and detecting and processing of the sensory consequences of this change. Subtracting out cortical responses during passive listening to recordings of the phonations isolated HGP increases specific to speech production, highlighting right parietal and premotor cortex, and left posterior temporal cortex involvement in the motor response. Correlation of HGP with behavioral compensation demonstrated right frontal region involvement in modulating participant's compensatory response. This study highlights the bihemispheric sensorimotor cortical network involvement in auditory feedback‐based control of vocal pitch. Hum Brain Mapp 37:1474‐1485, 2016. © 2016 Wiley Periodicals, Inc.  相似文献   
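Compensation in pitch-perturbation paradigms of this kind is usually quantified as the change in produced fundamental frequency, opposing the feedback shift, relative to the pre-shift baseline. A minimal sketch of how such a response might be computed from an f0 trace; the trace, latency, and analysis windows are simulated assumptions:

```python
import numpy as np

fs = 200                                    # f0 samples per second (illustrative)
t = np.arange(-0.2, 0.8, 1 / fs)            # time relative to a +100-cent feedback shift
f0_cents = np.zeros(t.size)                 # produced pitch relative to baseline, in cents

# Simulate a typical opposing (downward) response starting ~150 ms after the shift.
late = t > 0.15
f0_cents[late] = -30 * (1 - np.exp(-(t[late] - 0.15) / 0.1))

# Compensation magnitude: baseline minus the mean response in a post-shift window.
baseline = f0_cents[t < 0].mean()
compensation = baseline - f0_cents[(t > 0.3) & (t < 0.5)].mean()
print(f"compensation = {compensation:.1f} cents (opposing a +100-cent upward shift)")
```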

19.
Visual speech perception without primary auditory cortex activation
Speech perception is conventionally thought to be an auditory function, but humans often use their eyes to perceive speech. We investigated whether visual speech perception depends on processing by the primary auditory cortex in hearing adults. In a functional magnetic resonance imaging experiment, a pulse-tone was presented contrasted with gradient noise. During the same session, a silent video of a talker saying isolated words was presented contrasted with a still face. Visual speech activated the superior temporal gyrus anterior, posterior, and lateral to the primary auditory cortex, but not the region of the primary auditory cortex. These results suggest that visual speech perception is not critically dependent on the region of primary auditory cortex.  相似文献   

20.
We will review converging evidence that language-related symptoms of the schizophrenic syndrome, such as auditory verbal hallucinations, arise at least in part from processing abnormalities in posterior language regions. These language regions are either adjacent to or overlapping with regions in the (posterior) temporal cortex and temporo-parieto-occipital junction that are part of a system for processing social cognition, emotion, and self-representation or agency. The inferior parietal and posterior superior temporal regions contain multi-modal representational systems that may also provide rapid feedback and feed-forward activation to unimodal regions such as auditory cortex. We propose that over-activation of these regions could not only result in erroneous activation of semantic and speech (auditory word) representations, resulting in thought disorder and voice hallucinations, but could also result in many of the other symptoms of schizophrenia. These regions are also part of the so-called "default network", a network of regions that are normally active, and their activity is also correlated with activity within the hippocampal system.
