1.
Video conferencing provides an environment in which multiple users linked over a network can hold meetings. Since a large quantity of audio and video data is transferred to multiple users in real time, research into reducing the quantity of transferred data has been drawing attention. Such methods extract and transfer only a user's features from the video data and then reconstruct the video conference using virtual humans. The disadvantage of this approach is that only the positions and features of the hands and head are extracted and reconstructed, while the other virtual body parts do not follow the user. To enable a virtual human to accurately mimic the user's entire body in a 3D virtual conference, we examined which features should be extracted to express a user more clearly and how they can be reproduced by a virtual human. In this 3D video conferencing, the user's pose is estimated by comparing predefined images with a photographed image of the user, and a virtual human that takes the estimated pose is generated. However, this requires predefining a diverse set of images for pose estimation, and it is difficult to define behaviors that express poses correctly. This paper proposes a framework that automatically generates the pose-images used to estimate a user's pose and the behaviors required to present the user through a virtual human in a 3D video conference. A method for applying this framework to a 3D video conference on the basis of the automatically generated data is also described. In the experiment, the proposed framework was implemented on a mobile device, and the generation process for the poses and behaviors of the virtual human was verified. Finally, by applying programming by demonstration, we developed a system that can automatically collect the various data necessary for a video conference without any prior knowledge of the video conferencing system.
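As a rough illustration of the pose-estimation step described in this abstract, the sketch below compares a captured user frame against a set of generated pose-images and selects the closest one. The distance metric (mean squared error on grayscale frames), the image sizes, and all names are assumptions made for illustration, not the paper's actual implementation.

```python
# Minimal sketch of template-based pose estimation: the photographed user image is
# compared against automatically generated pose-images and the closest pose is chosen.
# The metric and all names below are illustrative assumptions.
import numpy as np

def estimate_pose(user_frame: np.ndarray, pose_images: dict[str, np.ndarray]) -> str:
    """Return the label of the predefined pose-image most similar to the user frame."""
    best_label, best_score = None, float("inf")
    for label, template in pose_images.items():
        score = np.mean((user_frame.astype(float) - template.astype(float)) ** 2)
        if score < best_score:
            best_label, best_score = label, score
    return best_label

# Usage: pose_images would come from the framework's automatic generation step;
# the estimated label then selects the behavior the virtual human plays back.
pose_images = {"sitting": np.zeros((64, 64)), "raising_hand": np.ones((64, 64))}
print(estimate_pose(np.full((64, 64), 0.9), pose_images))  # -> "raising_hand"
```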
2.
Multimedia Tools and Applications - Affective social multimedia computing provides us with the opportunity to improve our daily lives. Various things, such as devices in ubiquitous computing...
3.
The Journal of Supercomputing - Automated enforcement systems (AESs), which detect speeding vehicles or vehicles that violate traffic signals, are installed and operated on urban arterial...
4.
Chae, Jeongsook; Jin, Yong; Wen, Mingyun; Zhang, Weiqiang; Sung, Yunsick; Cho, Kyungeun. The Journal of Supercomputing, 2019, 75(4): 1909-1921
The Journal of Supercomputing - Recently, diverse virtual reality devices have been developed and utilized. In particular, devices that recognize user motions, such as gripping hands, and...
5.
In wireless sensor networks (WSNs), sensor nodes and sink nodes communicate among themselves to collect and send data. Because of the volume of data transmission, it is important to minimize the power used for this communication. Q-learning can be applied to find the optimal path between two nodes. However, Q-learning suffers from a significant learning time, meaning that the learning process must be conducted in advance in order to be applicable to WSNs. Many studies have proposed methods to decrease the learning time by reducing the state space or by updating more Q-values at each update. Reducing the size of the Q-learning state space leads to an inability to execute the optimal action, because the correct state may not be available. Other methods utilize additional information by involving a teacher, control the time flow of Q-learning by considering the number of updates for each Q-value, or use a prioritized queue in the update procedure. Such methods are not well suited to the real-world environment and complexity of WSNs. A more suitable method involves updating the Q-values iteratively, and a combination of these updating methods may further reduce the learning time. This paper proposes a reward propagation method (RPM), i.e., a method that integrates various updating algorithms to propagate the reward of the goal state to more Q-values, thus reducing the learning time required for Q-learning. By updating not only the Q-value of the last visited state and executed action but also the Q-values of unvisited states and unexecuted actions, the learning time can be reduced considerably. This method integrates the following three Q-value updating algorithms. First, we incorporate the concept of $Q(\lambda)$-learning, which propagates the terminal reward to the Q-values of the states visited before the reward was received. If the terminal reward were also propagated to the Q-values of unvisited states, the learning time could be reduced further. Second, the concept of reward propagation is expanded. The previous one-step reward update method, once the terminal reward is received, updates the Q-values of those states from which the terminal reward can be reached by executing a single action. If the terminal reward is propagated to states from which the reward can only be reached by executing more than one action, more Q-values can be updated. Third, we apply a type of fuzzy Q-learning with eligibility, which updates the Q-values of unexecuted actions. Although this method is not utilized directly, the concept of updating the Q-values of unexecuted actions is applied. To investigate how much the learning time is reduced by the proposed method, we compare it with conventional Q-learning. Given that the optimal path problem of WSNs can be remodeled as a reinforcement learning problem, a hunter–prey capture game involving two agents in a grid environment is utilized. RPM, which plays the role of the prey, learns to receive the terminal reward and escape from the hunter. The hunter randomly selects one of two movement policies and executes one action based on the selected policy. To measure the difference between the success rates of conventional Q-learning and RPM, an equivalent environment and parameters are used for both methods.
We conduct three experiments: the first to compare RPM and conventional Q-learning, the second to test the scalability of RPM, and the third to evaluate the performance of RPM in a differently configured environment. The RPM results are compared with those of conventional Q-learning in terms of the success rate of receiving the terminal reward, which provides a measure of the difference in required learning times. The greatest reduction in learning time is obtained with a grid size of $10 \times 10$, no obstacles, and 3,000 episodes to be learned; in these episodes, the success rate of RPM is 232% higher than that of conventional Q-learning. We perform two experiments to verify the scalability of RPM by changing the size of the environment. In a $12 \times 12$ grid environment, RPM initially exhibits a maximum success rate 176% higher than that of conventional Q-learning. However, as the size of the environment increases, the effect of propagating the terminal rewards decreases, and the improvement in the success rate over conventional Q-learning declines relative to the $10 \times 10$ grid environment. With a $14 \times 14$ grid environment, the relative effect of RPM declines further, giving a maximum success rate around 138% higher than that of conventional Q-learning. The scalability experiments show that increasing the size of the environment without changing the scope of terminal reward propagation decreases the success rate of RPM; nevertheless, RPM still exhibits a higher success rate than conventional Q-learning. The improvement in the peak success rate, between 138% and 232%, can greatly reduce the learning time in difficult environments. If the computational cost of terminal reward propagation is not a critical issue, given that this cost grows exponentially with the scope of propagation, the learning time can easily be reduced further by increasing the scope. Finally, we compare the success rates when obstacles are deployed within the $10 \times 10$ environment. The obstacles naturally degrade the performance of both RPM and conventional Q-learning, but the proposed method still outperforms conventional Q-learning by about 20% to 59%. Our experimental results show that the peak success rate of the proposed method is consistently superior to that of conventional Q-learning. The improvement of RPM over conventional Q-learning depends on the size of the environment and the number of obstacles, and is proportional to the number of additional Q-values that are updated. As the size of the environment increases, the fraction of all Q-values that receive additional updates decreases, so the effect of RPM also decreases. Likewise, as the number of obstacles increases, fewer Q-values are updated, which also reduces the effect of RPM. Because RPM reduces the learning time, Q-learning can be applied to diverse fields, including WSNs, in which the learning time problem has limited its application. Although many Q-learning studies have been proposed to solve the learning time problem, the demand for reducing the learning time remains urgent. Given that the line of research that reduces learning time by updating more Q-values is very active, we expect further improvements by combining additional concepts with RPM to reduce the learning time.
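To make the reward-propagation idea concrete, the following minimal sketch contrasts a conventional one-step Q-learning update with an update that also pushes a distance-discounted terminal reward to nearby, possibly unvisited, states. The grid world, the propagation scope, and all parameter values are illustrative assumptions rather than the authors' implementation.

```python
# Minimal sketch of the reward-propagation idea behind RPM: when the terminal reward is
# received, it is also propagated (discounted by distance) to the Q-values of nearby
# states, including states that were never visited. All parameters below are assumed.
import numpy as np

GRID, ACTIONS = 10, 4                      # 10x10 grid, actions: up/down/left/right
ALPHA, GAMMA, SCOPE = 0.1, 0.9, 2          # learning rate, discount, propagation radius
Q = np.zeros((GRID, GRID, ACTIONS))

def standard_update(state, action, reward, next_state):
    """Conventional one-step Q-learning update for the visited state and executed action."""
    x, y = state
    nx, ny = next_state
    Q[x, y, action] += ALPHA * (reward + GAMMA * Q[nx, ny].max() - Q[x, y, action])

def propagate_terminal_reward(goal_state, reward):
    """Push a distance-discounted share of the terminal reward to every state within SCOPE,
    updating all actions there, even if the state was never visited."""
    gx, gy = goal_state
    for x in range(max(0, gx - SCOPE), min(GRID, gx + SCOPE + 1)):
        for y in range(max(0, gy - SCOPE), min(GRID, gy + SCOPE + 1)):
            dist = abs(x - gx) + abs(y - gy)
            if 0 < dist <= SCOPE:
                propagated = reward * (GAMMA ** dist)
                Q[x, y, :] = np.maximum(Q[x, y, :], propagated)
```

Consistent with the abstract's remark on computational cost, enlarging SCOPE updates more Q-values per terminal reward but increases the work performed at each propagation step.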
6.
Neural Computing and Applications - Deep learning improves the realistic expression of virtual simulations, specifically to solve multi-criteria decision-making problems, which generally rely on...
7.
The natural user interface (NUI) has been investigated across a variety of application domains. This paper proposes an approach for generating virtual agents that can support users of NUI-based applications through human–robot interaction (HRI) learning in a virtual environment. Conventional HRI learning is carried out by repeating processes that are time-consuming, complicated, and dangerous because of certain features of robots. Therefore, a method is needed to train virtual agents by having them interact with virtual humans that imitate human movements in a virtual environment. Once the interaction learning is completed, the resulting virtual agent can be applied to NUI-based interactive applications. The proposed method was applied to a model of a typical house in a virtual environment, with a virtual human performing daily-life activities such as washing, eating, and watching TV. The results show that the virtual agent can predict a human's intent, identify actions that are helpful to the human, and provide services 16% faster than a virtual agent trained using traditional Q-learning.
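As a purely hypothetical illustration of how such a trained agent might be queried at run time, the sketch below maps a predicted human intent to the service action with the highest learned value. The intents, actions, and values are invented placeholders and are not taken from the paper.

```python
# Illustrative sketch only: after interaction learning in the virtual environment,
# the learned value table is queried to pick a helpful service action for a predicted
# human intent. All entries below are made-up placeholders.
q_table = {
    "washing":     {"bring_towel": 0.9, "set_table": 0.1, "turn_on_tv": 0.0, "do_nothing": 0.2},
    "eating":      {"bring_towel": 0.1, "set_table": 0.8, "turn_on_tv": 0.2, "do_nothing": 0.1},
    "watching_tv": {"bring_towel": 0.0, "set_table": 0.1, "turn_on_tv": 0.9, "do_nothing": 0.3},
}

def select_service(predicted_intent: str) -> str:
    """Return the action the virtual agent expects to be most helpful for the intent."""
    values = q_table[predicted_intent]
    return max(values, key=values.get)

print(select_service("eating"))  # -> "set_table"
```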
8.

The data computing process is utilized in various areas, such as autonomous driving. Autonomous vehicles are intended to detect and track nearby moving objects while avoiding collisions, and to navigate in complex situations such as heavy traffic and dense pedestrian areas. Object tracking is therefore a core technology in the environment perception systems of autonomous vehicles; it requires monitoring the surrounding objects and predicting their moving states in real time. In this paper, a multiple-object tracking method based on light detection and ranging (LiDAR) data is proposed, using a Kalman filter and a data computing process. We assume that the movements of the tracked objects are captured as consecutive frames, so model-based detection and tracking of dynamic objects is possible. A Kalman filter is applied to predict the posterior state of a tracked object from its prior state, where the state denotes the position, shape, and size of the object. By computing the likelihood between the predicted tracked objects and the clusters registered from them, the data association of the tracked objects can be performed. Experimental results showed enhanced object tracking performance in a dynamic environment, with an average matching probability of the tracked objects greater than 92.9%.
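For context on the predict-and-update cycle described above, the following is a minimal constant-velocity Kalman filter sketch for a single tracked object. The state layout, noise covariances, and time step are assumptions; the paper's state additionally includes object shape and size, which this sketch omits.

```python
# Minimal constant-velocity Kalman filter sketch for one tracked object.
# State is [x, y, vx, vy]; LiDAR clusters supply position measurements only.
import numpy as np

DT = 0.1                                   # time between LiDAR frames (assumed)
F = np.array([[1, 0, DT, 0],               # state transition: position advances by velocity
              [0, 1, 0, DT],
              [0, 0, 1,  0],
              [0, 0, 0,  1]], dtype=float)
H = np.array([[1, 0, 0, 0],                # measurement model: clusters give position only
              [0, 1, 0, 0]], dtype=float)
Q_NOISE = np.eye(4) * 0.01                 # process noise covariance (assumed)
R_NOISE = np.eye(2) * 0.1                  # measurement noise covariance (assumed)

def predict(x, P):
    """Predict the posterior state of the tracked object from its prior state."""
    return F @ x, F @ P @ F.T + Q_NOISE

def update(x, P, z):
    """Correct the prediction with the position z of the associated LiDAR cluster."""
    y = z - H @ x                          # innovation
    S = H @ P @ H.T + R_NOISE
    K = P @ H.T @ np.linalg.inv(S)         # Kalman gain
    return x + K @ y, (np.eye(4) - K @ H) @ P

# Data association (sketch): each cluster is matched to the track whose prediction gives
# the highest measurement likelihood, i.e., the smallest innovation-weighted distance.
```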