1.
Iterative decoders such as turbo decoders have become integral components of modern broadband communication systems because of their ability to provide substantial coding gains. A key computational kernel in iterative decoders is the maximum a posteriori probability (MAP) decoder. The MAP decoder is recursive and complex, which makes high-speed implementations extremely difficult to realize. In this paper, we present block-interleaved pipelining (BIP) as a new high-throughput technique for MAP decoders. An area-efficient symbol-based BIP MAP decoder architecture is proposed by combining BIP with the well-known look-ahead computation. These architectures are compared with conventional parallel architectures in terms of speed-up, memory and logic complexity, and area. Compared to the parallel architecture, the BIP architecture provides the same speed-up with a reduction in logic complexity by a factor of M, where M is the level of parallelism. The symbol-based architecture provides a speed-up in the range of 1 to 2 with a logic complexity that grows exponentially with M and a state metric storage requirement that is reduced by a factor of M compared to a parallel architecture. The symbol-based BIP architecture provides a speed-up in the range of M to 2M with an exponentially higher logic complexity and a reduced memory complexity compared to a parallel architecture. These high-throughput architectures are synthesized in a 2.5-V 0.25-μm CMOS standard cell library and post-layout simulations are conducted. For turbo decoder applications, we find that the BIP architecture provides a throughput gain of 1.96 at the cost of a 63% area overhead. For turbo equalizer applications, the symbol-based BIP architecture enables us to achieve a throughput gain of 1.79 with an area savings of 25%.
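The recursive bottleneck the abstract refers to is visible in the forward (alpha) state-metric update of a log-domain MAP decoder: each trellis step depends on the previous one, which is exactly the dependence BIP and look-ahead restructure. A minimal max-log-MAP sketch (the trellis representation and metric values here are illustrative, not the paper's architecture):

```python
def maxstar(a, b):
    # Max-log-MAP approximation: max*(a, b) ~= max(a, b).
    # The exact Jacobian logarithm would add log(1 + e^-|a-b|).
    return max(a, b)

def forward_recursion(gammas, num_states):
    """Forward (alpha) recursion of a MAP decoder in the log domain.

    gammas: list over trellis steps; each step is a dict mapping a
            transition (prev_state, state) to its branch metric.
    Returns the alpha metrics after every step, starting from the
    all-zero state. Note the serial dependence: step k needs step k-1,
    which is why high-throughput implementations are hard.
    """
    alpha = [0.0] + [float("-inf")] * (num_states - 1)  # start in state 0
    history = [alpha[:]]
    for gamma in gammas:
        new_alpha = [float("-inf")] * num_states
        for (sp, s), g in gamma.items():
            new_alpha[s] = maxstar(new_alpha[s], alpha[sp] + g)
        alpha = new_alpha
        history.append(alpha[:])
    return history
```

Block-interleaved pipelining sidesteps this dependence by interleaving independent blocks into the recursion pipeline rather than unrolling a single block's recursion.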
2.
Scalable Parallel Algorithms for FPT Problems
Algorithmic methods based on the theory of fixed-parameter tractability are combined with powerful computational platforms to launch systematic attacks on combinatorial problems of significance. As a case study, optimal solutions to very large instances of the NP-hard vertex cover problem are computed. To accomplish this, an efficient sequential algorithm and various forms of parallel algorithms are devised, implemented, and compared. Maintaining a balanced decomposition of the search space is shown to be critical to achieving scalability. Target problems need only be amenable to reduction and decomposition. Applications in high-throughput computational biology are also discussed.
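The fixed-parameter tractable core of vertex cover is the bounded search tree: pick any uncovered edge, branch on which endpoint joins the cover, and recurse with budget k-1, giving at most 2^k tree nodes regardless of graph size. A minimal sequential sketch (the paper's parallel algorithms distribute this search space; the code below is just the textbook branching step):

```python
def vertex_cover(edges, k):
    """Decide whether the graph given by `edges` has a vertex cover
    of size <= k, via the classic O(2^k) bounded search tree.

    For any edge (u, v), at least one endpoint must be in the cover,
    so branching on u-or-v is exhaustive. Tree depth is at most k.
    """
    if not edges:
        return True          # nothing left to cover
    if k == 0:
        return False         # edges remain but budget is spent
    u, v = edges[0]
    # Branch 1: put u in the cover; drop all edges incident to u.
    if vertex_cover([e for e in edges if u not in e], k - 1):
        return True
    # Branch 2: put v in the cover.
    return vertex_cover([e for e in edges if v not in e], k - 1)
```

Parallelizing this amounts to handing out subtrees of the branching tree to different processors, which is where the balanced decomposition emphasized in the abstract matters.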
3.
Ion-beam machining of millimeter scale optics
An ion-beam microcontouring process is developed and implemented for figuring millimeter scale optics. Ion figuring is a noncontact machining technique in which a beam of high-energy ions is directed toward a target substrate to remove material in a predetermined and controlled fashion. Owing to this noncontact mode of material removal, problems associated with tool wear and edge effects, which are common in conventional machining processes, are avoided. Ion-beam figuring is presented as an alternative for the final figuring of small (<1-mm) optical components. The depth of the material removed by an ion beam is a convolution between the ion-beam shape and an ion-beam dwell function, defined over a two-dimensional area of interest. Therefore determination of the beam dwell function from a desired material removal map and a known steady beam shape is a deconvolution process. A wavelet-based algorithm has been developed to model the deconvolution process, in which the desired removal contours and ion-beam shapes are synthesized numerically as wavelet expansions. These expansions are then combined mathematically to compute the dwell function, or tool path, for controlling the figuring process. Various models have been developed to test the stability of the algorithm and to understand the critical parameters of the figuring process. The figuring system primarily consists of a duoplasmatron ion source that ionizes argon to generate a focused (~200-μm FWHM) ion beam. This beam is rastered over the removal surface with a perpendicular set of electrostatic plates controlled by a computer guidance system. Experimental confirmation of ion figuring is demonstrated by machining a one-dimensional sinusoidal depth profile in a prepolished silicon substrate. This profile was figured to within an rms error of 25 nm in one iteration.
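The forward model that the deconvolution inverts can be sketched in one dimension: the removal depth at each position is the discrete convolution of the beam's removal footprint with the dwell-time function. (The sampling and names below are illustrative; the paper works over a two-dimensional area using wavelet expansions.)

```python
def removal_profile(dwell, beam):
    """1-D forward model of ion figuring: depth removed at each
    sample point is the discrete convolution of the beam footprint
    with the dwell-time function.

    dwell: dwell time at each raster position
    beam:  material removed per unit dwell time, sampled across
           the beam's footprint
    Returns a profile of length len(dwell) + len(beam) - 1.
    """
    n, m = len(dwell), len(beam)
    out = [0.0] * (n + m - 1)
    for i, t in enumerate(dwell):
        for j, b in enumerate(beam):
            out[i + j] += t * b  # beam centred at raster position i
    return out
```

Figuring runs this model in reverse: given a desired removal map and a measured steady beam shape, solve for the dwell function, which is why beam-shape stability is one of the critical process parameters.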
4.
Five varieties of cotton seed oils from Jayadhar, Bhagya, Mysore Vijaya, Hampi and 170 CO2 have been analysed for their fatty acid content, and the results (wt.%) fall in the following ranges: myristic 0.8–1.1; palmitic 23.0–23.9; stearic 2.7–4.2; arachidic 0.3–0.7; behenic 0.3–1.4; oleic 11.9–22.8; linoleic 47.5–58.1; and cyclopropenoid acids 0.6–2.1. The oil content and iodine value range from 20.2–22.5 percent and 104.3–115.3, respectively. Protein content ranges from 24.8–48.4 percent.
5.
In this paper, we propose a framework for low-energy digital signal processing (DSP) in which the supply voltage is scaled beyond the critical voltage imposed by the requirement to match the critical path delay to the throughput. This deliberate introduction of input-dependent errors leads to degradation in the algorithmic performance, which is compensated for via algorithmic noise-tolerance (ANT) schemes. The resulting setup, which comprises the DSP architecture operating at subcritical voltage and the error control scheme, is referred to as soft DSP. The effectiveness of the proposed scheme is enhanced when arithmetic units with a higher "delay imbalance" are employed. A prediction-based error-control scheme is proposed to enhance the performance of the filtering algorithm in the presence of errors due to soft computations. For a frequency-selective filter, it is shown that the proposed scheme provides a 60–81% reduction in energy dissipation for filter bandwidths up to 0.5π (where 2π corresponds to the sampling frequency f_s) over that achieved via conventional architecture and voltage scaling, with a maximum of 0.5-dB degradation in the output signal-to-noise ratio (SNRo). It is also shown that the proposed algorithmic noise-tolerance schemes can be used to improve the performance of DSP algorithms in the presence of bit-error rates of up to 10^-3 due to deep submicron (DSM) noise.
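A prediction-based error-control loop of the kind the abstract describes can be sketched as follows. The moving-average predictor and fixed threshold below are illustrative assumptions, not the paper's exact design; the idea is only that sub-critical-voltage errors are large and rare, so a cheap predictor can flag and replace them:

```python
def ant_correct(soft_outputs, threshold, order=3):
    """Algorithmic noise tolerance via prediction-based error control
    (simplified sketch).

    The main filter runs at sub-critical supply voltage, so occasional
    outputs carry large input-dependent errors. A low-cost predictor
    (here, a moving average of the last `order` accepted samples)
    estimates each new sample; outputs deviating from the prediction
    by more than `threshold` are declared erroneous and replaced by
    the predicted value.
    """
    corrected = []
    for y in soft_outputs:
        if len(corrected) >= order:
            pred = sum(corrected[-order:]) / order
            if abs(y - pred) > threshold:
                y = pred  # soft-computation error detected: substitute
        corrected.append(y)
    return corrected
```

The energy win comes from the main filter's voltage scaling; the predictor only needs to be accurate enough to catch the rare large deviations.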
6.
7.
Deciphering the genome of the fruitfly, Drosophila melanogaster, has revealed 39 genes coding for putative odorant-binding proteins (OBPs), more than are known at present for any other insect species. Using specific antibodies, the expression mosaic of five such OBPs (OS-E, OS-F, LUSH, PBPRP2, PBPRP5) on the antenna and maxillary palp has been mapped in the electron microscope. It was found that (1) OBP expression does correlate with morphological sensillum types and subtypes, (2) several OBPs may be co-localized in the same sensillum, and (3) OBP localization is not restricted to olfactory sensilla. The expression of PBPRP2 in antennal epidermis sheds some light on the possible evolution of OBPs.
8.
We present low-power and high-speed algorithms and architectures for complex adaptive filters. These architectures have been derived via the application of algebraic and algorithm transformations. The strength reduction transformation is applied at the algorithmic level, as opposed to the traditional application at the architectural level. This results in a power reduction of 21% as compared with the traditional cross-coupled structure. A fine-grained pipelined architecture for the strength-reduced algorithm is then developed via the relaxed look-ahead transformation. This technique, which is an approximation of the conventional look-ahead computation, maintains the functionality of the algorithm rather than its exact input-output behavior. Convergence analysis of the proposed architecture is presented and supported via simulation results. The pipelined architecture allows high-speed operation with negligible hardware overhead. It also enables an additional power savings of 39–69% when combined with power-supply reduction. Thus, an overall power reduction of 60–90% over the traditional cross-coupled architecture is achieved. The proposed architecture is then employed as a receive equalizer in a communication system for a data rate of 51.84 Mb/s over 100 m of UTP-3 wiring in an ATM-LAN environment. Simulation results indicate that speedups of up to 156 can be achieved with about a 0.8-dB loss in performance.
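Strength reduction for complex arithmetic rests on computing a complex multiply with three real multiplications instead of four, trading a multiplier for extra additions. A sketch of the identity (the paper applies it at the level of the whole cross-coupled adaptive filter, which goes further than this single operation):

```python
def complex_mult_strength_reduced(a, b, c, d):
    """Compute (a + jb) * (c + jd) with three real multiplies
    instead of the four used by the direct cross-coupled form.

    Direct form:   real = a*c - b*d,  imag = a*d + b*c  (4 multiplies)
    Reduced form shares the product c*(a + b):
    """
    p1 = c * (a + b)   # = ac + bc, shared by both outputs
    p2 = a * (d - c)   # = ad - ac
    p3 = b * (c + d)   # = bc + bd
    real = p1 - p3     # (ac + bc) - (bc + bd) = ac - bd
    imag = p1 + p2     # (ac + bc) + (ad - ac) = ad + bc
    return real, imag
```

In hardware, multipliers dominate area and power relative to adders, which is why removing one real multiply per complex operation translates into the power savings reported for the adaptive filter.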
9.
10.
The directional distribution of the ambient neutron dose equivalent from 145-MeV ^19F projectiles bombarding a thick aluminium target is measured and analysed. The measurements are carried out with a commercially available dose equivalent meter at 0°, 30°, 60° and 90° with respect to the beam direction. The experimental results are compared with doses calculated with the EMPIRE nuclear reaction code and with different empirical formulations proposed by others. The results are also compared with measured data obtained from an earlier experiment at a lower projectile energy of 110 MeV for the same target-projectile combination.