Paid full text: 17
Free: 1
Industrial technology: 18
By year: 2022 (1), 2021 (1), 2017 (1), 2016 (3), 2015 (3), 2013 (2), 2010 (1), 2009 (1), 2002 (1), 2000 (3), 1991 (1)
Sort order: 18 results found (search time: 15 ms)
11.
Augmented reality has been on the rise due to the proliferation of mobile devices. At the same time, object recognition has also come to the fore. In particular, many studies have focused on object recognition based on markerless matching. However, most of these studies have targeted desktop systems, which offer high CPU and memory performance, rather than mobile systems, which previously could not support high-performance object recognition based on markerless matching. In this paper, we propose a method that uses the OpenCV mobile library to improve real-time object recognition performance on mobile systems. First, we investigate the original object recognition algorithm to identify performance bottlenecks. Second, we optimize the algorithm by analyzing each module and applying appropriate code enhancements. Finally, we change the operational structure of the algorithm, reducing the execution frequency of the object recognition task from every frame to every fourth frame for real-time operation. During the three frames in which the original method is not executed, the object is instead tracked using the mobile device's accelerometer. We carry out experiments to reveal how much each aspect of our method improves the overall object recognition performance; overall, experimental performance improves by approximately 800%, with a corresponding reduction of approximately 1% in object recognition accuracy. Therefore, the proposed technique can significantly improve the performance of markerless-matching-based object recognition on mobile systems for real-time operation.
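A minimal sketch of the frame-skipping idea described in this abstract, assuming an ORB feature pipeline stands in for the paper's markerless matcher; readAccelerometer() is a hypothetical placeholder for a platform-specific sensor API, and descriptor matching is elided.

```cpp
#include <opencv2/opencv.hpp>
#include <vector>

// Hypothetical accelerometer reading; on a real device this would come from a
// platform-specific sensor API (Android/iOS), not from OpenCV.
struct Accel { float x = 0.f, y = 0.f, z = 0.f; };
Accel readAccelerometer() { return Accel{}; }   // stub for the sketch

int main() {
    cv::VideoCapture cap(0);
    cv::Ptr<cv::ORB> orb = cv::ORB::create();
    std::vector<cv::KeyPoint> keypoints;
    cv::Mat frame, descriptors;
    cv::Point2f objectPos(0.f, 0.f);            // last estimated object position
    int frameIdx = 0;

    while (cap.read(frame)) {
        if (frameIdx % 4 == 0) {
            // Full markerless recognition only on every fourth frame.
            orb->detectAndCompute(frame, cv::noArray(), keypoints, descriptors);
            // ... match descriptors against the reference object and
            //     update objectPos from the matched keypoints ...
        } else {
            // In the three intermediate frames, extrapolate the position
            // from device motion instead of re-running recognition.
            Accel a = readAccelerometer();
            objectPos.x += a.x;                 // crude dead-reckoning update
            objectPos.y += a.y;
        }
        ++frameIdx;
    }
    return 0;
}
```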
12.
A model for overlapped operation between the control unit (CU) and processing elements (PEs) in an SIMD machine is presented. The major requirements and structure of the CU for overlapped operation in SIMD mode are described and overlapped operation is formally defined. To use the computing power of both the CU and the PEs most effectively to execute a single program, a balanced workload between the CU and PEs is required. It is assumed that certain computations (e.g., the manipulation of loop index variables, PE-common array index calculations) can be migrated from the PEs to the CU and vice versa. This research demonstrates how to increase the effectiveness of an SIMD machine by allowing overlapped operation between the CU and PEs. The best overlap is ideally achieved by assigning equal amounts of work to be executed concurrently on the CU and PEs, resulting in a 2N speedup for an N-PE system. The goal of this research is to develop a model of overlapped operation in SIMD mode so that the actual maximum possible performance of the SIMD machine can be attained.
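The timing intuition behind the balanced-workload claim can be sketched as follows; this is a simplified reading of the abstract's model with arbitrary example numbers, not the paper's formal definition.

```cpp
#include <algorithm>
#include <cstdio>

int main() {
    // Time of the work migrated to the control unit vs. time of the
    // data-parallel work on the PEs (arbitrary units, balanced workload).
    double t_cu = 4.0, t_pe = 4.0;
    double t_serialized = t_cu + t_pe;             // CU and PEs take turns (no overlap)
    double t_overlapped = std::max(t_cu, t_pe);    // CU and PEs run concurrently
    std::printf("overlap speedup = %.2f\n", t_serialized / t_overlapped);
    // A balanced split gives the maximum overlap factor of 2; combined with
    // N-way parallelism across the PEs, this is the 2N figure quoted above.
    return 0;
}
```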
13.
The Journal of Supercomputing - This research aims to design an effective prefetching method for hybrid main memory systems consisting of dynamic random-access memory (DRAM) and phase-change...
14.
A new cache architecture based on temporal and spatial locality   (total citations: 5; self-citations: 0; citations by others: 5)
A data cache system is designed as a low-power, high-performance cache structure for embedded processors. A direct-mapped cache is a favorite choice for a short cycle time but suffers from a high miss rate. Hence, the proposed dual data cache is an approach to improve the miss ratio of a direct-mapped cache without affecting its access time. The proposed cache system can exploit temporal and spatial locality effectively by maximizing the effective cache memory space for any given cache size. It consists of two caches: a direct-mapped cache with a small block size and a fully associative spatial buffer with a large block size. Temporal locality is exploited by selectively caching candidate small blocks in the direct-mapped cache, while spatial locality is exploited aggressively by fetching multiple neighboring small blocks whenever a cache miss occurs. According to the results of comparison and analysis, similar performance can be achieved with a cache four times smaller than a conventional direct-mapped cache, and the power consumption of the proposed cache can be reduced by around 4% compared with a victim cache configuration.
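A minimal sketch of the two-structure lookup described in this abstract, assuming toy block and buffer sizes and a simple FIFO/promotion policy; the paper's actual block-selection and replacement policies are more refined than this.

```cpp
#include <array>
#include <cstdint>
#include <cstdio>
#include <vector>
#include <algorithm>

// Assumed toy parameters: 8-byte small blocks, 32-byte large blocks,
// 64 direct-mapped sets, 8 spatial-buffer entries (FIFO replacement).
constexpr std::size_t SMALL_BLOCK = 8, LARGE_BLOCK = 32, DM_SETS = 64, SB_ENTRIES = 8;

struct DMEntry { bool valid = false; std::uint64_t tag = 0; };
std::array<DMEntry, DM_SETS> dmCache;      // direct-mapped cache, small blocks
std::vector<std::uint64_t> spatialBuffer;  // fully associative buffer, large blocks

bool access(std::uint64_t addr) {
    std::uint64_t smallBlk = addr / SMALL_BLOCK;
    std::uint64_t largeBlk = addr / LARGE_BLOCK;
    std::size_t set = smallBlk % DM_SETS;

    // 1. Probe the direct-mapped cache (temporal locality).
    if (dmCache[set].valid && dmCache[set].tag == smallBlk) return true;

    // 2. Probe the spatial buffer (spatial locality).
    if (std::find(spatialBuffer.begin(), spatialBuffer.end(), largeBlk)
            != spatialBuffer.end()) {
        // Re-referenced small block: promote it into the direct-mapped cache.
        dmCache[set].valid = true;
        dmCache[set].tag = smallBlk;
        return true;
    }

    // 3. Miss: fetch the whole large block (several neighbouring small blocks)
    //    into the spatial buffer, evicting the oldest entry when it is full.
    if (spatialBuffer.size() == SB_ENTRIES) spatialBuffer.erase(spatialBuffer.begin());
    spatialBuffer.push_back(largeBlk);
    return false;
}

int main() {
    std::uint64_t trace[] = {0, 8, 16, 8, 100, 0};   // small synthetic address trace
    for (std::uint64_t a : trace)
        std::printf("addr %3llu -> %s\n", (unsigned long long)a,
                    access(a) ? "hit" : "miss");
    return 0;
}
```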
15.
16.
In order to guarantee both performance and programmability in 3D graphics applications, vector and multithreaded SIMD architectures have been employed in recent graphics processing units. This paper introduces a novel instruction-systolic array architecture, which transfers an instruction stream in a pipelined fashion to efficiently share the expensive functional resources of a graphics processor. In such parallel architectures, cache misses and dynamic branches can cause additional latencies and complicated management. To address this problem, we combine a systolic execution scheme with on-demand warp activation that handles cache-miss latency and branch divergence efficiently without significantly increasing hardware resources, either in logic or in register space. Simulation indicates that the proposed architecture offers 25% better performance than a traditional SIMD architecture with the same resources, and requires significantly fewer resources to match the performance of a typical modern vector multithreaded GPU architecture.
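A minimal sketch of the on-demand warp activation idea mentioned in this abstract; this is a toy software model of my own, not the paper's hardware: a warp that misses in the cache is parked rather than stalling the instruction-systolic pipeline, and is re-activated only when its data has returned.

```cpp
#include <deque>
#include <cstdio>

// Each warp carries a program counter and a flag recording whether the data
// for its pending load has already been returned by memory.
struct Warp { int id; int pc; bool dataReady; };

int main() {
    std::deque<Warp> active = { {0, 0, false}, {1, 0, false}, {2, 0, false} };
    std::deque<Warp> parked;                        // warps waiting on a cache miss

    for (int cycle = 0; cycle < 20 && !(active.empty() && parked.empty()); ++cycle) {
        // A returning memory request re-activates one parked warp on demand.
        if (!parked.empty() && cycle % 3 == 0) {
            Warp w = parked.front(); parked.pop_front();
            w.dataReady = true;
            active.push_back(w);
        }
        if (active.empty()) continue;               // nothing to feed into the array

        Warp w = active.front(); active.pop_front();
        bool cacheMiss = (w.pc == 2) && !w.dataReady;   // toy stand-in for a cache model
        if (cacheMiss) {
            parked.push_back(w);                    // park the warp instead of stalling
            std::printf("cycle %2d: warp %d missed at pc=%d, parked\n", cycle, w.id, w.pc);
        } else {
            std::printf("cycle %2d: warp %d executed pc=%d\n", cycle, w.id, w.pc);
            ++w.pc;
            if (w.pc < 4) active.push_back(w);      // 4-instruction toy kernel
        }
    }
    return 0;
}
```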
17.
For the ubiquitous computing environment, an important assumption is that all the components in any specific environment are connected with each other. Under this assumption, we introduce an effective scheme to provide personalized services based on the Virtual Personal World (VPW), a model focused on service continuity and built from specially designed components. Previous ubiquitous frameworks have been concerned with the location at which a user is provided a specific service. In VPW, however, location is no longer the central problem; what matters is whether services continue seamlessly wherever the user goes. Services are not regarded as a sum of functions embedded in the objects of a particular place. Instead, we define a resource management scheme based on a unified form of the objects that participate in service provision, the so-called virtual object (VO), so that a service can be described as the sum of the functions of its VOs. With this resource management scheme, users can utilize any required object as a VO wherever it is located. In addition, for better utilization of VPW services, we introduce a novel form of profiles and a service provision scheme based on polymorphism. Our simulation results show that the ratio of pure VPW service time is 0.15% higher than that of a conventional location-based service, and the probability that users receive the service they want increases by 29% in the proposed VPW environment.
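The polymorphism-based service provision can be sketched as follows; the VirtualObject interface and the Display/Speaker classes are illustrative assumptions, not the paper's actual components. The point of the sketch is that a service is realized as the sum of the provide() functions of whatever VOs are bound to the user at the moment, so it can continue with the objects available at a new location.

```cpp
#include <memory>
#include <string>
#include <vector>
#include <cstdio>

// Every physical or logical resource is wrapped as a uniform virtual object (VO).
class VirtualObject {
public:
    virtual ~VirtualObject() = default;
    virtual void provide(const std::string& content) = 0;   // one function of the service
};

class Display : public VirtualObject {
public:
    void provide(const std::string& content) override {
        std::printf("[display] %s\n", content.c_str());
    }
};

class Speaker : public VirtualObject {
public:
    void provide(const std::string& content) override {
        std::printf("[speaker] %s\n", content.c_str());
    }
};

// A service is the sum of the functions of the VOs currently bound to the user,
// so it continues with whichever VOs exist wherever the user goes.
void runService(const std::vector<std::unique_ptr<VirtualObject>>& vos,
                const std::string& content) {
    for (const auto& vo : vos) vo->provide(content);
}

int main() {
    std::vector<std::unique_ptr<VirtualObject>> vos;
    vos.push_back(std::make_unique<Display>());
    vos.push_back(std::make_unique<Speaker>());   // the VO set changes as the user moves
    runService(vos, "meeting reminder");
    return 0;
}
```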
18.
Kim Jeong-Geun, Jo Yoon-Su, Yoon Su-Kyung, Kim Shin-Dug. The Journal of Supercomputing, 2021, 77(11): 12924-12952.
The Journal of Supercomputing - This research aims to design a history-table-based linear analysis method for a DRAM-PCM (phase-change memory) hybrid memory system supporting irregular and complex...