Similar Literature
20 similar documents found.
1.
Visual information is mapped with respect to the retina within the early stages of the visual cortex. Yet the brain must also represent object location in a coordinate system that matches the reference frame used by the motor cortex to code reaching movement in space. The mechanism of the necessary coordinate transformation between the different frames of reference from the visual to the motor system, as well as its localization within the cerebral cortex, is still unclear. Coordinate transformation is traditionally described as a series of elementary computations along the visuomotor cortical pathways, and the motor system is thought to receive target information in a body-centered reference frame. However, neurons along these pathways have a number of similar properties and receive common input signals, suggesting that a non-retinocentric representation of object location in space might be available for sensory and motor purposes throughout the visuomotor pathway. This paper reviews recent findings showing that elementary input signals, such as retinal and eye position signals, reach the dorsal premotor cortex. We also compare eye position effects in the premotor cortex with those described in the posterior parietal cortex. Our main thesis is that appropriate sensory input signals are distributed across the visuomotor continuum and could potentially allow, in parallel, the emergence of multiple and task-dependent reference frames. Received: 21 September 1998 / Accepted: 19 March 1999
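
Eye-position effects of the kind reviewed here are classically summarized by the gain-field model: a retinotopic tuning curve whose amplitude, but not its peak, varies with orbital eye position. The Python sketch below illustrates that textbook model only; the parameter values and the linear gain are illustrative assumptions, not this paper's fits.

```python
import numpy as np

def gain_field_response(retinal_pos, eye_pos, pref_pos=0.0, sigma=10.0,
                        baseline=1.0, slope=0.05):
    """Textbook gain-field model: a retinotopic Gaussian tuning curve whose
    amplitude is scaled (not shifted) by a linear function of eye position.
    All parameter values here are illustrative, not fitted to any data."""
    tuning = np.exp(-0.5 * ((retinal_pos - pref_pos) / sigma) ** 2)
    gain = baseline + slope * eye_pos      # planar eye-position gain
    return gain * tuning

# The retinal tuning peak stays put; only the response amplitude changes
# with where the eyes are pointing.
retinal = np.linspace(-30, 30, 7)          # retinal positions (deg)
for eye in (-20.0, 0.0, 20.0):             # three fixation positions (deg)
    rates = gain_field_response(retinal, eye)
    print(f"eye {eye:+5.1f} deg:", np.round(rates, 2))
```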

2.
Neurophysiological studies suggest that the transformation of visual signals into arm movement commands does not involve a sequential recruitment of the various reach-related regions of the cerebral cortex but a largely simultaneous activation of these areas, which form a distributed and recurrent visuomotor network. However, little is known about how the reference frames used to encode reach-related variables in a given “node” of this network vary with the time taken to generate a behavioral response. Here we show that in an instructed delay reaching task, the reference frames used to encode target location in the parietal reach region (PRR) and area 5 of the posterior parietal cortex (PPC) do not evolve dynamically in time; rather, the same spatial representation exists within each area from the time target-related information is first instantiated in the network until the moment of movement execution. As previously reported, target location was encoded predominantly in eye coordinates in PRR and in both eye and hand coordinates in area 5. Thus, the different computational stages of the visuomotor transformation for reaching appear to coexist simultaneously in the parietal cortex, which may facilitate the rapid adjustment of trajectories that is a hallmark of skilled reaching behavior.

3.
Neurons in the parietal reach region (PRR) have been implicated in the sensory-to-motor transformation required for reaching toward visually defined targets. The neurons in each cortical hemisphere might be specifically involved in planning movements of just one limb, or the PRR might code reach endpoints generically, independent of which limb will actually move. Previous work has shown that the preferred directions of PRR neurons are similar for right and left limb movements but that the amplitude of modulation may vary greatly. We now test the hypothesis that frames of reference and eye and hand gain field modulations will, like preferred directions, be independent of which hand moves. This was not the case. Many neurons show clear differences in both the frame of reference and the direction and strength of gain field modulations, depending on which hand is used to reach. The results suggest that the information conveyed from the PRR to areas closer to the motor output (the readout from the PRR) is different for each limb, and that individual PRR neurons contribute either to the control of the contralateral limb or to bimanual control.

4.
It has been hypothesized that the end-point position of reaching may be specified in an egocentric frame of reference. In most previous studies, however, reaching was toward a memorized target, rather than an actual target. Thus, the role played by sensorimotor transformation could not be dissociated from the role played by storage in short-term memory. In the present study the direct process of sensorimotor transformation was investigated in reaching toward continuously visible targets that need not be stored in memory. A virtual reality system was used to present visual targets in different three-dimensional (3D) locations in two different tasks, one with visual feedback of the hand and arm position (Seen Hand) and the other without such feedback (Unseen Hand). In the Seen Hand task, the axes of maximum variability and of maximum contraction converge toward the mid-point between the eyes. In the Unseen Hand task only the maximum contraction correlates with the sight-line, and the axes of maximum variability are not viewer-centered but rotate anti-clockwise around the body and the effector arm during the move from the right to the left workspace. The bulk of findings from these and previous experiments support the hypothesis of a two-stage process, with a gradual transformation from viewer-centered to body-centered and arm-centered coordinates. Retinal, extra-retinal and arm-related signals appear to be progressively combined in superior and inferior parietal areas, giving rise to egocentric representations of the end-point position of reaching. Received: 25 November 1998 / Accepted: 8 July 1999
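
The axes of maximum variability and maximum contraction discussed above are typically obtained from an eigen-decomposition of the covariance of reach endpoints. A minimal sketch of that analysis, run on simulated endpoints whose scatter is elongated along a hypothetical sight-line:

```python
import numpy as np

def variability_axes(endpoints):
    """Principal axes of 3D reach-endpoint scatter for one target.
    endpoints: (n, 3) array of final fingertip positions. Returns the
    variances and axes sorted from the axis of maximum variability down
    to the axis of maximum contraction."""
    evals, evecs = np.linalg.eigh(np.cov(endpoints.T))
    order = np.argsort(evals)[::-1]
    return evals[order], evecs[:, order]

# Illustrative data: endpoint scatter elongated along a hypothetical
# sight-line, as in the Seen Hand condition described above.
rng = np.random.default_rng(0)
sight_line = np.array([0.3, 0.1, 1.0])
sight_line /= np.linalg.norm(sight_line)
cloud = (rng.normal(size=(200, 3)) * [2.0, 2.0, 2.0]
         + 8.0 * rng.normal(size=(200, 1)) * sight_line)

variances, axes = variability_axes(cloud)
angle = np.degrees(np.arccos(abs(axes[:, 0] @ sight_line)))
print(f"max-variability axis is {angle:.1f} deg from the sight-line")
```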

5.
A central problem in motor research has been to understand how sensory signals are transformed to generate a goal-directed movement. This problem has been formulated as a set of coordinate transformations that begins with an extrinsic coordinate frame representing the spatial location of a target and ends with an intrinsic coordinate frame describing muscle activation patterns. Insight into this process of sensorimotor transformation can be gained by examining the coordinate frames of neuronal activity in interconnected regions of the brain. We recorded the activity of neurons in primary motor cortex (M1) and ventral premotor cortex (PMv) in a monkey trained to perform a task that dissociates three major coordinate frames of wrist movement: muscle, wrist joint, and an extrinsic coordinate frame. We found three major types of neurons in M1 and PMv. The first type was termed 'extrinsic-like'. The activity of these neurons appeared to encode the direction of movement in space independent of the patterns of wrist muscle activity or joint movement that produced the movements. The second type was termed 'extrinsic-like with gain modulation'. The activity of these neurons appeared to encode the direction of movement in space, but the magnitude (gain) of neuronal activity depended on the posture of the forearm. The third type was termed 'muscle-like', since their activity co-varied with muscle activity. The great majority of the directionally tuned neurons in the PMv were classified as 'extrinsic-like' (48/59, 81%). A smaller group was classified as 'extrinsic-like with gain modulation' (7/59, 12%). In M1, the three types of neurons were more equally represented. Our results raise the possibility that cortical processing between M1 and PMv may contribute to a sensorimotor transformation between extrinsic and intrinsic coordinate frames. Recent modeling studies have demonstrated the computational plausibility of such a process.
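
The three response classes can be read as the outcomes of a simple comparison: estimate each cell's preferred direction in extrinsic space under two forearm postures, then ask whether the direction shifts with the forearm and whether the amplitude changes. The sketch below reconstructs that logic with invented thresholds; it is not the authors' classification procedure.

```python
import numpy as np

def preferred_direction(dirs_deg, rates):
    """Vector-average estimate of a cell's preferred direction (PD)."""
    vec = np.sum(rates * np.exp(1j * np.radians(dirs_deg)))
    return np.degrees(np.angle(vec)) % 360, np.abs(vec)

def classify_cell(dirs_deg, rates_posture1, rates_posture2,
                  forearm_rotation=90.0, gain_threshold=0.3):
    """Compare PDs measured in extrinsic space across two forearm
    postures. Thresholds and decision rule are invented for illustration."""
    pd1, amp1 = preferred_direction(dirs_deg, rates_posture1)
    pd2, amp2 = preferred_direction(dirs_deg, rates_posture2)
    shift = (pd2 - pd1 + 180) % 360 - 180
    if abs(shift) > forearm_rotation / 2:          # PD follows the forearm
        return "muscle-like"
    if abs(amp2 - amp1) / max(amp1, amp2) > gain_threshold:
        return "extrinsic-like with gain modulation"
    return "extrinsic-like"

dirs = np.arange(0, 360, 45)                       # 8 movement directions
tuned = 5 + 4 * np.cos(np.radians(dirs - 90))      # cell tuned to 90 deg
print(classify_cell(dirs, tuned, 0.5 * tuned))     # same PD, halved gain
```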

6.
Dynamic representation of eye position in the parieto-occipital sulcus. Area V6A, on the anterior bank of the parieto-occipital sulcus of the monkey brain, contains neurons sensitive both to visual stimulation and to the position and movement of the eyes. We examined the effects of eye position and eye movement on the activity of V6A neurons in monkeys trained to saccade to and fixate on target locations. Forty-eight percent of the neurons responded during these tasks. The responses were not caused by the visual stimulation of the fixation light because extinguishing the fixation light had no effect. Instead the neurons responded in relation to the position of the eye during fixation. Some neurons preferred a restricted range of eye positions, whereas others had more complex and distributed eye-position fields. None of these eye-related neurons responded before or during saccades. They all responded postsaccadically during fixation on the target location. However, the neurons did not simply encode the static position of the eyes. Instead most (88%) responded best after the eye saccaded into the eye-position field and responded significantly less well when the eye made a saccade that was entirely contained within the eye-position field. Furthermore, for many eye-position cells (45%), the response was greatest immediately after the eye reached the preferred position and was significantly reduced after 500 ms of fixation. Thus these neurons preferentially encoded the initial arrival of the eye into the eye-position field rather than the continued presence or the movement of the eye within the eye-position field. Area V6A therefore contains a representation of the position of the eye in the orbit, but this representation appears to be dynamic, emphasizing the arrival of the eye at a new position.

7.
In what frame of reference does the supplementary eye field (SEF) encode saccadic eye movements? In this study, the "saccade collision" test was used to determine whether a saccade electrically evoked in the monkey's SEF is programmed to reach an oculocentric goal or a nonoculocentric (e.g., head- or body-centered) goal. If the eyes start moving just before or when an oculocentric goal is imposed by electrical stimulation, the trajectory of the saccade to that goal should compensate for the ongoing movement. Conversely, if the goal imposed by electrical stimulation is nonoculocentric, the trajectory of the evoked saccade should not be altered. In head-fixed experiments, we mapped the trajectories of evoked saccades while the monkey fixated at each of 25 positions 10 degrees apart in a 40 x 40 degrees grid. For each studied SEF site, we calculated convergence indices and found that "convergent" and "nonconvergent" sites were separately clustered: nonconvergent rostral to convergent. Then, the "saccade collision" test was systematically applied. We found compensation at sites where saccades were of the nonconvergent type and practically no compensation at sites where saccades were of the convergent type. The results indicate that the SEF can encode saccade goals in at least two frames of reference and suggest a rostrocaudal segregation in the representation of these two modes.
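
A convergence index of this kind contrasts two accounts of the evoked saccades: a fixed retinotopic displacement versus convergence on a single head-centered goal. One plausible least-squares formulation, offered as a sketch rather than the paper's exact definition:

```python
import numpy as np

def convergence_index(fixations, evoked_vectors):
    """Compare how well electrically evoked saccades are explained by a
    fixed retinotopic vector (nonconvergent) versus by convergence on a
    single head-centered goal (convergent). Returns a value near 1 for
    convergent sites and near 0 for fixed-vector sites."""
    S = np.asarray(fixations, float)          # (n, 2) fixation positions
    V = np.asarray(evoked_vectors, float)     # (n, 2) evoked saccade vectors
    resid_fixed = np.sum((V - V.mean(axis=0)) ** 2)                 # fixed-vector fit
    endpoints = S + V
    resid_goal = np.sum((endpoints - endpoints.mean(axis=0)) ** 2)  # goal fit
    return resid_fixed / (resid_fixed + resid_goal + 1e-12)

# Saccades evoked from a 3 x 3 grid of fixation positions:
S = np.array([[x, y] for x in (-10, 0, 10) for y in (-10, 0, 10)], float)
print(f"convergent site:    {convergence_index(S, [10.0, 5.0] - S):.2f}")
print(f"nonconvergent site: {convergence_index(S, np.tile([8.0, 0.0], (9, 1))):.2f}")
```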

8.
In the intermediate and deep layers of the superior colliculus (SC), a well-established oculomotor structure, a substantial population of cells is involved in the control of arm movements. To examine the reference frame of these neurons, we recorded in two rhesus monkeys (Macaca mulatta) the discharges of 331 neurons in the SC and the underlying mesencephalic reticular formation (MRF) while the monkeys reached to the same target location during different gaze orientations. For 65 reach-related cells with sufficient data, and for simultaneously recorded electromyograms (EMGs) of 11 arm muscles, we calculated an ANOVA (factors: target position, gaze angle) and a gaze-dependency (GD) index. The EMGs and the activity of many (60%) of the reach-related neurons were not influenced by the target representation on the retina or by eye position. We refer to these as "gaze-independent" reach neurons. For 40%, however, the GD fell outside the range of the muscle modulation, and the ANOVA showed a significant influence of gaze. These "gaze-related" reach neurons discharge only when the monkey reaches for targets having specific coordinates in relation to the gaze axis, i.e., for targets in a gaze-related "reach movement field" (RMF). Neuronal activity was not modulated by the specific path of the arm movement, the muscle pattern necessary for its realization, or the arm that was used for the reach. In each SC we found gaze-related neurons with RMFs both in the contralateral and in the ipsilateral hemifield. The topographical organization of the gaze-related reach neurons in the SC could not be matched with the well-known visual and oculomotor maps. Gaze-related neurons were more strongly modulated across different directions of arm movement than were gaze-independent reach neurons. Gaze-related reach neurons were recorded at a median depth of 2.03 mm below the SC surface in the intermediate layers, where they overlap with saccade-related burst neurons (median depth: 1.55 mm). Most of the gaze-independent reach cells were found at a median depth of 4.01 mm below the SC surface, in the deep layers and in the underlying MRF. The gaze-related reach neurons, operating in a gaze-centered coordinate system, could signal either the desired target position with respect to gaze direction or the motor error between gaze axis and reach target. The gaze-independent reach neurons, possibly operating in a shoulder- or arm-centered reference frame, might carry signals closer to the motor output. Together, these two types of reach neurons add evidence to our hypothesis that the SC is involved in the sensorimotor transformation for eye-hand coordination in primates.
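
A gaze-dependency index like the one described can be computed as a simple modulation ratio across gaze angles and then compared against the same index for the EMGs. A minimal sketch, with the index definition and the EMG bound assumed for illustration:

```python
import numpy as np

def gaze_dependency_index(rates_by_gaze):
    """Modulation of a reach-related response across gaze angles for one
    fixed target: (max - min) / (max + min). One common form of such an
    index; the paper's exact definition may differ."""
    r = np.asarray(rates_by_gaze, float)
    return (r.max() - r.min()) / (r.max() + r.min() + 1e-12)

# A cell counts as "gaze-related" only if its GD exceeds the range of GD
# values computed the same way from the arm-muscle EMGs.
cell_gd = gaze_dependency_index([42.0, 18.0, 9.0])   # spikes/s per gaze angle
emg_gd_ceiling = 0.15                                # invented EMG bound
print("gaze-related" if cell_gd > emg_gd_ceiling else "gaze-independent")
```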

9.
Reach movement planning involves the representation of spatial target information in different reference frames. Neurons at parietal and premotor stages of the cortical sensorimotor system represent target information in eye- or hand-centered reference frames, respectively. How the different neuronal representations affect behavioral parameters of motor planning and control, i.e., which stage of neural representation is relevant for which aspect of behavior, is not obvious from the physiology. Here, we used a behavioral experiment to test whether different kinematic movement parameters are affected to different degrees by an eye- or a hand-centered reference frame. We used a generalized anti-reach task to test the influence of stimulus-response compatibility (SRC) in eye- and hand-reference frames on reach reaction times, movement times, and endpoint variability. Whereas in a standard anti-reach task the SRC is identical in the eye- and hand-reference frames, our generalized task allowed us to separate the SRC for the two reference frames. We found that reaction times were influenced by the SRC in both the eye- and the hand-reference frame. In contrast, movement times were influenced only by the SRC in the hand-reference frame, and endpoint variability only by the SRC in the eye-reference frame. Since movement times and endpoint variability are the result of planning and control processes, while reaction times are consequences of the planning process alone, we suggest that SRC effects on reaction times are well suited to investigate the reference frames of movement planning, and that eye- and hand-reference frames have distinct effects on different phases of motor action and on different kinematic movement parameters.

10.
The integration of visual and auditory events is thought to require a joint representation of visual and auditory space in a common reference frame. We investigated the coding of visual and auditory space in the lateral and medial intraparietal areas (LIP, MIP) as a candidate for such a representation. We recorded the activity of 275 neurons in LIP and MIP of two monkeys while they performed saccades to a row of visual and auditory targets from three different eye positions. We found 45% of these neurons to be modulated by the locations of visual targets, 19% by auditory targets, and 9% by both visual and auditory targets. The reference frames for both visual and auditory receptive fields ranged along a continuum between eye- and head-centered: approximately 10% of auditory and 33% of visual neurons had receptive fields that were more consistent with an eye- than a head-centered frame of reference, 23% and 18% had receptive fields more consistent with a head- than an eye-centered frame of reference, and a large fraction of both visual and auditory response patterns were consistent with neither reference frame. The results were similar to the reference frame we have previously found for auditory stimuli in the inferior colliculus and core auditory cortex. The correspondence between the visual and auditory receptive fields of individual neurons was weak. Nevertheless, the visual and auditory responses were sufficiently well correlated that a simple one-layer network constructed to calculate target location from the activity of the neurons in our sample performed successfully for auditory targets even though the weights were fit based only on the visual responses. We interpret these results as suggesting that although the representations of space in areas LIP and MIP are not easily described within the conventional conceptual framework of reference frames, they nevertheless process visual and auditory spatial information in a similar fashion.
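
The one-layer readout test can be reproduced in miniature: fit linear weights from simulated population activity to target location using visual trials only, then decode auditory trials with the same weights. Everything in the simulation below (tuning widths, noise levels, cell counts) is invented, but it shows why correlated visual and auditory tuning lets the cross-modal transfer succeed.

```python
import numpy as np

rng = np.random.default_rng(1)
n_cells, n_targets, n_reps = 60, 7, 20
targets = np.linspace(-24, 24, n_targets)        # target azimuths (deg)

# Illustrative population: visual and auditory tuning of each cell share
# a common (but noisy) spatial preference, mimicking the weak-but-real
# cross-modal correlation reported in the abstract.
pref = rng.uniform(-30, 30, n_cells)

def trial_responses(jitter):
    centers = pref + rng.normal(0, jitter, n_cells)
    r = np.exp(-0.5 * ((targets[:, None] - centers) / 15.0) ** 2)
    r = np.repeat(r, n_reps, axis=0)
    return r + rng.normal(0, 0.05, r.shape)

R_vis, R_aud = trial_responses(0.0), trial_responses(5.0)
y = np.repeat(targets, n_reps)                   # true location per trial

# One-layer linear readout: weights fit on VISUAL trials only ...
X_vis = np.column_stack([R_vis, np.ones(len(y))])
w, *_ = np.linalg.lstsq(X_vis, y, rcond=None)

# ... then applied unchanged to AUDITORY trials, as in the paper's test.
X_aud = np.column_stack([R_aud, np.ones(len(y))])
rmse = np.sqrt(np.mean((X_aud @ w - y) ** 2))
print(f"auditory decoding error with visually fit weights: {rmse:.1f} deg RMSE")
```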

11.
Previous findings suggest the posterior parietal cortex (PPC) contributes to arm movement planning by transforming target and limb position signals into a desired reach vector. However, the neural mechanisms underlying this transformation remain unclear. In the present study we examined the responses of 109 PPC neurons as movements were planned and executed to visual targets presented over a large portion of the reaching workspace. In contrast to previous studies, movements were made without concurrent visual and somatic cues about the starting position of the hand. For comparison, a subset of neurons was also examined with concurrent visual and somatic hand position cues. We found that single cells integrated target and limb position information in a very consistent manner across the reaching workspace. Approximately two-thirds of the neurons with significantly tuned activity (42/61 and 30/46 for left and right workspaces, respectively) coded targets and initial hand positions separably, indicating no hand-centered encoding, whereas the remaining one-third coded targets and hand positions inseparably, in a manner more consistent with the influence of hand-centered coordinates. The responses of both types of neurons were largely invariant with respect to the presence or absence of visual hand position cues, suggesting their corresponding coordinate frames and gain effects were unaffected by cue integration. The results suggest that the PPC uses a consistent scheme for computing reach vectors in different parts of the workspace that is robust to changes in the availability of somatic and visual cues about hand position.
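
Separable versus inseparable coding of target and initial hand position is commonly tested by asking how close a cell's response matrix over (target x hand position) is to a rank-1 outer product. A generic SVD-based sketch of that idea, not necessarily the statistic used in this study:

```python
import numpy as np

def separability_index(resp):
    """Fraction of variance in a (target x initial-hand-position) response
    matrix captured by its best rank-1 approximation, resp ~ f(target) *
    g(hand): a common SVD-style test for separable (gain-field-like)
    versus inseparable (hand-centered-like) coding."""
    s = np.linalg.svd(np.asarray(resp, float), compute_uv=False)
    return s[0] ** 2 / np.sum(s ** 2)

# Separable toy cell: target tuning multiplicatively scaled by hand position.
tgt = np.array([1.0, 3.0, 5.0, 3.0, 1.0])
hand = np.array([0.5, 1.0, 1.5])
print(f"separable cell:   {separability_index(np.outer(tgt, hand)):.2f}")   # 1.00
# Inseparable toy cell: tuned to the target-minus-hand difference.
T, H = np.meshgrid(np.arange(5.0), np.arange(3.0), indexing="ij")
print(f"inseparable cell: {separability_index(np.exp(-(T - H - 1) ** 2)):.2f}")
```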

12.
We previously reported that the kinematics of reaching movements reflect the superimposition of two separate control mechanisms specifying the hand's spatial trajectory and its final equilibrium position. We now asked whether the brain maintains separate representations of the spatial goals for planning hand trajectory and final position. One group of subjects learned a 30 degree visuomotor rotation about the hand's starting point while performing a movement reversal task ("slicing") in which they reversed direction at one target and terminated movement at another. This task required accuracy in acquiring a target mid-movement. A second group adapted while moving to, and stabilizing at, a single target ("reaching"). This task required accuracy in specifying an intended final position. We examined how learning in the two tasks generalized both to movements made from untrained initial positions and to movements directed toward untrained targets. Shifting the initial hand position had differential effects on the locations of reversals and final positions: in slicing, trajectory directions remained unchanged and reversal locations were displaced, whereas the final positions of both reaches and slices were relatively unchanged. Generalization across directions in slicing was consistent with a hand-centered representation of the desired reversal point, as demonstrated previously for this task, whereas the distributions of final positions were consistent with an eye-centered representation, as found previously in studies of pointing in three-dimensional space. Our findings indicate that the intended trajectory and final position are represented in different coordinate frames, reconciling previous conflicting claims of hand-centered (vectorial) and eye-centered representations in reach planning.

13.
Auditory spatial information arises in a head-centered coordinate frame, whereas the saccade command signals generated by the superior colliculus (SC) are thought to specify target locations in an eye-centered frame. However, auditory activity in the SC appears to be neither head- nor eye-centered but in a reference frame that is intermediate between the two. This neurophysiological finding suggests that auditory saccades might not fully compensate for changes in initial eye position. Here, we investigated whether the accuracy of saccades to sounds is affected by initial eye position in rhesus monkeys. We found that, on average, a 12 degree horizontal shift in initial eye position produced only a 0.6 to 1.6 degree horizontal shift in the endpoints of auditory saccades made to targets at a range of locations along the horizontal meridian. This shift was similar in size to the modest influence of eye position on visual saccades. This virtually complete compensation for initial eye position implies that auditory activity in the SC is read out in a manner that is appropriate for generating accurate saccades to sounds.
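
The degree of compensation implied by these numbers is simply one minus the ratio of the endpoint shift to the eye-position shift. A two-line check using the values quoted in the abstract:

```python
# Compensation for initial eye position, using the abstract's numbers:
# a 12 deg horizontal eye shift produced only a 0.6 to 1.6 deg shift in
# the endpoints of auditory saccades.
eye_shift = 12.0
for endpoint_shift in (0.6, 1.6):
    compensation = 1.0 - endpoint_shift / eye_shift
    print(f"endpoint shift {endpoint_shift:.1f} deg -> "
          f"{100 * compensation:.0f}% compensation")
# -> 95% and 87%: virtually complete compensation.
```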

14.
Canceling a pending movement is a hallmark of voluntary behavioral control because it allows us to adapt quickly to unexpected changes, either in the external environment or in our own thoughts. The countermanding paradigm allows the study of the inhibitory processes of motor acts by requiring the subject to withhold planned movements in response to an infrequent stop-signal. At present, the neural processes underlying the inhibitory control of arm movements are mostly unknown. We recorded the activity of single units in the rostral and caudal portions of the dorsal premotor cortex (PMd) of monkeys trained in a countermanding reaching task. We found that, among neurons with movement-preparatory activity, about one-third exhibited a modulation before the behavioral estimate of the time it takes to cancel a planned movement. Hence, these neurons exhibit a pattern of activity suggesting that PMd plays a critical role in the brain networks involved in the control of arm movement initiation and suppression.
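
The "behavioral estimate of the time it takes to cancel a planned movement" is conventionally the stop-signal reaction time (SSRT) derived from the race model. Below is a sketch of the textbook integration estimator with invented numbers; the estimation details used in the study itself may differ.

```python
import numpy as np

def ssrt_integration(go_rts, stop_signal_delay, p_failed_stop):
    """Stop-signal reaction time via the classic race-model "integration"
    method used in countermanding studies: take the go-RT quantile equal
    to the probability of failing to stop, minus the stop-signal delay."""
    rts = np.sort(np.asarray(go_rts, float))
    k = min(max(int(round(p_failed_stop * len(rts))) - 1, 0), len(rts) - 1)
    return rts[k] - stop_signal_delay

# Illustrative numbers (ms): simulated go RTs, and one stop-signal delay
# at which the subject failed to cancel on half of the stop trials.
rng = np.random.default_rng(2)
print(f"SSRT ~ {ssrt_integration(rng.normal(450, 60, 500), 200.0, 0.5):.0f} ms")
```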

15.
Humans build representations of objects and their locations by integrating imperfect information from multiple perceptual modalities (e.g., visual, haptic). Because sensory information is specified in different frames of reference (i.e., eye- and body-centered), it must be remapped into a common coordinate frame before integration and storage in memory. Such transformations require an understanding of body articulation, which is estimated through noisy sensory data. Consequently, target information acquires additional coordinate transformation uncertainty (CTU) during remapping because of errors in joint angle sensing. As a result, CTU creates differences in the reliability of target information depending on the reference frame used for storage. This paper explores whether the brain represents and compensates for CTU when making grasping movements. To address this question, we varied eye position in the head, while participants reached to grasp a spatially fixed object, both when the object was in view and when it was occluded. Varying eye position changes CTU between eye and head, producing additional uncertainty in remapped information away from forward view. The results showed that people adjust their maximum grip aperture to compensate both for changes in visual information and for changes in CTU when the target is occluded. Moreover, the amount of compensation is predicted by a Bayesian model for location inference that uses eye-centered storage.
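
The key quantity in such a Bayesian account is the total uncertainty of the remembered eye-centered target location once coordinate transformation uncertainty is added. A toy version in which CTU is assumed to grow linearly with eye-in-head eccentricity (both the growth law and the constants are assumptions):

```python
import numpy as np

def remapped_sd(sigma_vis, eye_ecc_deg, ctu_per_deg=0.02):
    """Standard deviation of a remembered eye-centered target location
    after adding coordinate transformation uncertainty (CTU): noise in
    sensed eye-in-head position contributes extra variance that grows
    with eccentricity. Linear growth and the constants are assumptions."""
    sigma_ctu = ctu_per_deg * abs(eye_ecc_deg)
    return np.sqrt(sigma_vis ** 2 + sigma_ctu ** 2)

# With the target occluded, a safety-margin account predicts that maximum
# grip aperture should widen with this total positional uncertainty.
sigma_vis = 0.4                            # cm, eye-centered memory noise
for ecc in (0.0, 20.0, 40.0):              # eye-in-head eccentricity (deg)
    print(f"eye at {ecc:>4.0f} deg: position SD = {remapped_sd(sigma_vis, ecc):.2f} cm")
```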

16.
We compared neuronal activity in the dorsal and ventral premotor areas (PMd and PMv, respectively) when monkeys were preparing to perform arm-reaching movements, in a motor-set period before their actual execution. The monkeys were required to select one of four possible movements (reaching to a target on the left or right, using either the left or right arm) in accordance with two sets of instruction cues, followed by a delay period and a subsequent motor-set period. During the motor-set period, the monkeys were required to be ready for a movement-trigger signal so as to start the arm reach promptly. We analyzed the activity of 211 PMd and 109 PMv neurons that showed selectivity for the combination of the two instruction cues during the motor-set period. A majority (53%) of PMd neurons exhibited activity significantly tuned to both target location and arm use, and approximately equal numbers of PMd neurons showed selectivity for either forthcoming arm use or target location. In contrast, 60% of PMv neurons showed selectivity for target location only and not for arm use. These findings point to a differential use of neuronal activity in the two areas: preparation for action in the PMd and preparation for target acquisition in the PMv.

17.
Flexible strategies for sensory integration during motor planning
When planning target-directed reaching movements, human subjects combine visual and proprioceptive feedback to form two estimates of the arm's position: one to plan the reach direction, and another to convert that direction into a motor command. These position estimates are based on the same sensory signals but rely on different combinations of visual and proprioceptive input, suggesting that the brain weights sensory inputs differently depending on the computation being performed. Here we show that the relative weighting of vision and proprioception depends both on the sensory modality of the target and on the information content of the visual feedback, and that these factors affect the two stages of planning independently. The observed diversity of weightings demonstrates the flexibility of sensory integration and suggests a unifying principle by which the brain chooses sensory inputs so as to minimize errors arising from the transformation of sensory signals between coordinate frames.
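
The standard model behind this kind of flexible weighting is minimum-variance cue combination: each estimate is weighted by its inverse variance, and the weights change whenever the relevant variances do. A minimal sketch with invented numbers:

```python
def fuse(x_vis, var_vis, x_prop, var_prop):
    """Minimum-variance (maximum-likelihood) fusion of a visual and a
    proprioceptive estimate of hand position: each cue is weighted by its
    inverse variance, so the weighting shifts whenever the task changes
    which variances matter."""
    w_vis = (1.0 / var_vis) / (1.0 / var_vis + 1.0 / var_prop)
    x = w_vis * x_vis + (1.0 - w_vis) * x_prop
    var = 1.0 / (1.0 / var_vis + 1.0 / var_prop)
    return x, var, w_vis

# Invented numbers: vision is more reliable here, so it gets 80% weight.
x, var, w_vis = fuse(x_vis=10.0, var_vis=0.5, x_prop=12.0, var_prop=2.0)
print(f"fused estimate {x:.1f} cm, variance {var:.2f}, visual weight {w_vis:.2f}")
```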

18.
The place-specific activity of hippocampal cells provides downstream structures with information regarding an animal's position within an environment and, perhaps, the location of goals within that environment. In rodents, recent research has suggested that distal cues primarily set the orientation of the spatial representation, whereas the boundaries of the behavioral apparatus determine the locations of place activity. The current study was designed to address possible biases in some previous research that may have minimized the likelihood of observing place activity bound to distal cues. Hippocampal single-unit activity was recorded from six freely moving rats as they were trained to perform a tone-initiated place-preference task on an open-field platform. To investigate whether place activity was bound to the room- or platform-based coordinate frame (or both), the platform was translated within the room at an "early" and at a "late" phase of task acquisition (Shift 1 and Shift 2). At both time points, CA1 and CA3 place cells demonstrated room-associated and/or platform-associated activity, or remapped in response to the platform shift. Shift 1 revealed place activity that reflected an interaction between a dominant platform-based (proximal) coordinate frame and a weaker room-based (distal) frame because many CA1 and CA3 place fields shifted to a location intermediate to the two reference frames. Shift 2 resulted in place activity that became more strongly bound to either the platform- or room-based coordinate frame, suggesting the emergence of two independent spatial frames of reference (with many more cells participating in platform-based than in room-based representations).

19.
To test the functional implications of gaze signals that we previously reported in the dorsal premotor cortex (PMd), we trained two rhesus monkeys to point to visual targets presented on a touch screen while controlling their gaze orientation. Each monkey had to perform four different tasks. To initiate a trial, the monkey had to put his hand on a starting position at the center of the touch screen and fixate a fixation point. In one task, the animal had to make a reaching movement to a peripheral target randomly presented at one of eight possible locations on a circle while maintaining fixation at the center of this virtual circle (central fixation + reaching). In the second task, the monkey maintained fixation at the location of the upcoming peripheral target and, later, reached to that location. After a delay, the target was turned on and the monkey made a reaching arm movement (target fixation + reaching). In the third task, the monkey made a saccade to the target without any arm movement (saccade). Finally, in the fourth task, the monkey first made a saccade to the target, then reached to it after a delay (saccade + reaching). This design allowed us to examine the contribution of the oculomotor context to arm-related neuronal activity in PMd. We analyzed the effects of the task type on neuronal activity and found that many cells showed a task effect during the signal (26/60; 43%), set (16/49; 33%) and/or movement (15/54; 28%) epochs, depending on the oculomotor history. These findings, together with previously published data, suggest that PMd codes limb-movement direction in a gaze-dependent manner and may, thus, play an important role in the brain mechanisms of eye-hand coordination during visually guided reaching. Received: 10 September 1998 / Accepted: 19 March 1999

20.
The selection of one of two visual stimuli as a target for a motor action may depend on external as well as internal variables. We examined whether the preference to select a leftward or rightward target depends on the action that is performed (eye or arm movement) and to what extent the choice is influenced by the target location. Two targets were presented at the same distance to the left and right of a fixation position, and the stimulus onset asynchrony (SOA) was adjusted until both targets were selected equally often. This balanced SOA is then a quantitative measure of selection preference. In the two macaque monkeys tested, we found the balanced SOA shifted to the left side for left-arm movements and to the right side for right-arm movements. Target selection strongly depended on the horizontal target location. By varying eye, head, and trunk position, we found this dependency embedded in a head-centered behavioral reference frame for saccade targets and, somewhat counter-intuitively, for reach targets as well. Target selection for reach movements was influenced by the eye position, whereas saccade target selection was unaffected by the arm position. These findings suggest that the neural processes underlying target selection for a reaching movement are to a large extent independent of the coordinate frame ultimately used to make the limb movement, but are instead closely linked to the coordinate frame used to plan a saccade to that target. This similarity may be indicative of a common spatial framework for hand-eye coordination.
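
Adjusting the SOA until the two targets are chosen equally often is a classic adaptive-staircase problem. The sketch below implements one simple 1-up/1-down rule on simulated choices; the adaptive procedure actually used in the study may differ.

```python
import numpy as np

def balanced_soa(choose_left, soa0=0.0, step=2.0, n_trials=600):
    """Toy 1-up/1-down staircase for estimating the balanced SOA: if the
    left target is chosen, shift the SOA against it, and vice versa, so
    the procedure hovers around the point of equal selection. choose_left
    is a callable soa -> bool standing in for the subject's choice."""
    soa, history = soa0, []
    for _ in range(n_trials):
        soa += -step if choose_left(soa) else step
        history.append(soa)
    return float(np.mean(history[n_trials // 2:]))   # average after settling

# Simulated chooser with a 12 ms leftward selection bias; positive SOA
# means the left target appears earlier.
rng = np.random.default_rng(3)
p_left = lambda soa: 1.0 / (1.0 + np.exp(-(soa + 12.0) / 8.0))
est = balanced_soa(lambda soa: rng.random() < p_left(soa))
print(f"balanced SOA ~ {est:.1f} ms (left target delayed to offset the bias)")
```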
