1.
The particle-based Discrete Element Method (DEM) can be applied to examine comminution processes. In this study, a DEM framework has been extended to model particle breakage without mass loss. After a breakage event occurs, spherical particles, as often considered in the DEM, are replaced by size-reduced spherical fragments. During the following time steps, the fragments grow to their desired sizes, so that the mass loss is counterbalanced. Previously defined overlaps with adjacent unbroken and broken particles (fragments), as well as with walls, are allowed. The breakage model has been realized in a parallelized DEM framework, because comminution processes often involve large numbers of particles and parallelization reduces the computational time efficiently. An oedometer (one-dimensional compression of a confined particle bed in the axial direction) has been modelled to investigate the parallelization efficiency and the influence of the permitted overlaps during the growth process on the growth duration. A simplified roller mill has been considered to examine the applicability of the breakage procedure under parallelization. The results show that parallelization reduces the computational time considerably. The breakage procedure is suitable for modelling comminution processes involving even densely packed particle systems and is superior to existing approaches.
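The replace-and-grow idea can be illustrated in a few lines. The sketch below uses an assumed fragment count, a linear growth law and an illustrative growth rate (none of these are the paper's exact model): a sphere is replaced by size-reduced fragments, which then grow until the parent's mass is recovered.

```python
import math

def break_particle(radius, n_fragments=4, init_scale=0.5):
    """Replace one sphere by n equal fragments. The fragments are
    inserted size-reduced (to keep overlaps bounded) and each carries
    a target radius chosen so that the n targets together conserve the
    parent's mass. Fragment count and scale are illustrative."""
    parent_volume = (4.0 / 3.0) * math.pi * radius ** 3
    target_r = (parent_volume / (n_fragments * (4.0 / 3.0) * math.pi)) ** (1.0 / 3.0)
    return [init_scale * target_r] * n_fragments, target_r

def grow(fragments, target_r, rate=0.1):
    """One time step of growth: each fragment radius moves toward its
    target (capped there), so the mass deficit from the size-reduced
    insertion is gradually counterbalanced."""
    return [min(target_r, r + rate * target_r) for r in fragments]

frags, target = break_particle(1.0)
for _ in range(20):                      # a few DEM time steps later...
    frags = grow(frags, target)
total_mass = sum((4.0 / 3.0) * math.pi * r ** 3 for r in frags)  # ...mass is restored
```

A slower `rate` lengthens the growth duration but keeps the transient overlaps with neighbours smaller, which is exactly the trade-off the oedometer study examines.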
2.
Image quality assessment is indispensable in computer vision applications such as image classification and image parsing. With the development of the Internet, image data acquisition has become more convenient. However, image distortion is inevitable due to imperfect image acquisition systems, transmission media and recording equipment. Traditional image quality assessment algorithms focus only on low-level visual features such as color or texture and cannot encode high-level features effectively. CNN-based methods have shown satisfactory results in image quality assessment, but existing methods suffer from incomplete feature extraction, partial image-block distortion, and an inability to determine scores. In this paper, we therefore propose a novel deep-learning framework for image quality assessment. We incorporate both low-level visual features and high-level semantic features to describe images better, and image quality is analyzed in a parallel processing mode. Experiments conducted on the LIVE and TID2008 datasets demonstrate that the proposed model predicts the quality of distorted images well, with both SROCC and PLCC reaching 0.92 or higher.
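The two evaluation metrics quoted at the end are standard and easy to state precisely. A minimal NumPy sketch (tie handling omitted, so it is an illustration rather than a full SROCC implementation): PLCC is the Pearson correlation between predicted and subjective scores, and SROCC is the Pearson correlation of their ranks.

```python
import numpy as np

def plcc(x, y):
    """Pearson linear correlation coefficient between two score vectors."""
    return float(np.corrcoef(x, y)[0, 1])

def srocc(x, y):
    """Spearman rank-order correlation: Pearson correlation of the ranks.
    (No tie correction -- sufficient for an illustration.)"""
    rx = np.argsort(np.argsort(x)).astype(float)
    ry = np.argsort(np.argsort(y)).astype(float)
    return plcc(rx, ry)

# toy check: a monotone but non-linear prediction has perfect SROCC
scores = np.array([1.0, 2.0, 3.0, 4.0, 5.0])   # subjective scores
preds = scores ** 3                             # monotone distortion of them
```

Because SROCC only sees ranks, it rewards monotone consistency, while PLCC also penalizes non-linearity; quality-assessment papers typically report both for that reason.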
3.
We investigate a parallelized divide-and-conquer approach based on a self-organizing map (SOM) to solve the Euclidean traveling salesman problem (TSP). Our approach consists of dividing the cities into municipalities, evolving the most appropriate solution within each municipality, and finally joining neighboring municipalities with a blend operator to obtain the final solution. We evaluate the performance of the parallelized approach on standard TSP test problems (TSPLIB) and show that it outperforms the sequential evolutionary SOM in both solution quality and runtime.
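For readers unfamiliar with the SOM heuristic underlying this approach, here is a minimal single-map sketch (ring size, learning rate, decay schedule and neighbourhood width are all illustrative assumptions, and the municipality splitting and blend operator are not shown): a ring of neurons is repeatedly attracted toward randomly drawn cities, and the final tour visits the cities in ring order of their nearest neurons.

```python
import math, random

def som_tsp(cities, n_neurons=None, iters=2000, seed=0):
    """Minimal self-organizing-map heuristic for the Euclidean TSP."""
    rng = random.Random(seed)
    n = n_neurons or 4 * len(cities)
    # neurons start on a small ring around the cities' centroid
    cx = sum(x for x, _ in cities) / len(cities)
    cy = sum(y for _, y in cities) / len(cities)
    ring = [(cx + 0.1 * math.cos(2 * math.pi * i / n),
             cy + 0.1 * math.sin(2 * math.pi * i / n)) for i in range(n)]
    lr, radius = 0.8, max(1, n // 8)
    for _ in range(iters):
        x, y = cities[rng.randrange(len(cities))]
        # winner: neuron closest to the sampled city
        w = min(range(n), key=lambda i: (ring[i][0] - x) ** 2 + (ring[i][1] - y) ** 2)
        # pull the winner and its ring neighbours toward the city
        for d in range(-radius, radius + 1):
            i = (w + d) % n
            g = math.exp(-(d * d) / max(1.0, (radius / 2.0) ** 2))
            ring[i] = (ring[i][0] + lr * g * (x - ring[i][0]),
                       ring[i][1] + lr * g * (y - ring[i][1]))
        lr *= 0.999   # slowly freeze the map
    # read the tour off the ring: cities ordered by their winning neuron
    return sorted(range(len(cities)),
                  key=lambda c: min(range(n),
                                    key=lambda i: (ring[i][0] - cities[c][0]) ** 2
                                                + (ring[i][1] - cities[c][1]) ** 2))

cities = [(0, 0), (0, 1), (1, 1), (1, 0)]
tour = som_tsp(cities)
```

The divide-and-conquer variant in the paper runs such maps per municipality in parallel and then blends the partial tours, which is what makes the method faster than one sequential SOM over all cities.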
4.
We present new parallelization and memory-reducing strategies for the graph-theoretic color-coding approximation technique, with applications to biological network analysis. Color-coding is a technique that gives fixed-parameter tractable algorithms for several well-known NP-hard optimization problems. In this work, by efficiently parallelizing the steps of color-coding, we create two new tools for analyzing biological protein-interaction networks: Fascia, for subgraph counting and motif finding, and FastPath, for signaling-pathway detection. We demonstrate considerable speedup over prior work, and the optimizations introduced in this paper can also be used for other problems where color-coding is applicable.
5.
This paper considers the Bus Terminal Location Problem (BTLP), which incorporates characteristics of both the p-median and maximal covering problems. We propose a parallel variable neighborhood search algorithm (PVNS) for solving the BTLP. An improved local search, based on efficient neighborhood interchange, is used for the p-median part and is combined with a reduced neighborhood size for the maximal covering part of the problem. The proposed parallel algorithm is compared with its non-parallel version; parallelization yields a significant speedup that scales with the processor core count. Computational results show that PVNS improves all existing results from the literature while using significantly less time. New, larger instances based on the rl instances from the TSP library are introduced, and computational results for them are reported.
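The shake-then-descend structure of variable neighborhood search is easy to show on the p-median part alone. The toy below (sequential, with illustrative parameter values and a plain 1-interchange local search; it is a stand-in for, not a reproduction of, the paper's PVNS) swaps k medians at random, descends, and restarts from k = 1 on improvement.

```python
import random

def cost(dist, medians):
    """p-median objective: each client is served by its closest open median."""
    return sum(min(dist[c][m] for m in medians) for c in range(len(dist)))

def vns_p_median(dist, p, k_max=3, iters=50, seed=0):
    """Skeleton of variable neighborhood search for the p-median problem."""
    rng = random.Random(seed)
    n = len(dist)
    best = set(rng.sample(range(n), p))
    for _ in range(iters):
        k = 1
        while k <= k_max:
            # shaking: replace m random medians by m random non-medians
            cur = set(best)
            m = min(k, p, n - p)
            out = rng.sample(sorted(cur), m)
            inn = rng.sample(sorted(set(range(n)) - cur), m)
            cur = (cur - set(out)) | set(inn)
            # local search: 1-interchange descent to a local optimum
            improved = True
            while improved:
                improved = False
                for med in sorted(cur):
                    for f in range(n):
                        if f not in cur and cost(dist, (cur - {med}) | {f}) < cost(dist, cur):
                            cur, improved = (cur - {med}) | {f}, True
                            break
                    if improved:
                        break
            if cost(dist, cur) < cost(dist, best):
                best, k = cur, 1   # improvement: return to the closest neighborhood
            else:
                k += 1             # no luck: shake harder
    return sorted(best), cost(dist, best)

# two tight clusters on a line -> the optimal 2-median picks each cluster's center
pts = [0.0, 0.1, 0.2, 10.0, 10.1, 10.2]
dist = [[abs(a - b) for b in pts] for a in pts]
medians, obj = vns_p_median(dist, p=2)
```

The paper's parallel version distributes the shaking/descent cycles across cores, which is why its runtime gain grows with the core count.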
6.
In this paper we present a new stable algorithm for the parallel QR decomposition of “tall and skinny” matrices. The algorithm has been developed for the dense symmetric eigensolver ELPA, where the QR decomposition of tall and skinny matrices is an important substep. Our new approach is based on the fast but unstable CholeskyQR algorithm (Stathopoulos and Wu, 2002) [1]. We show the stability of our new algorithm and provide promising results from our MPI-based implementation on BlueGene/P and Power6 systems.
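The baseline CholeskyQR algorithm the paper starts from fits in three lines of NumPy (this is the plain, unstabilized version, not the paper's new algorithm): form the small Gram matrix AᵀA, Cholesky-factor it as RᵀR, and recover Q = AR⁻¹.

```python
import numpy as np

def cholesky_qr(A):
    """CholeskyQR for a tall-and-skinny matrix A (rows >> columns).
    Fast and communication-friendly -- the Gram matrix is tiny and can be
    reduced across MPI ranks -- but unstable for ill-conditioned A, since
    cond(A^T A) = cond(A)^2; that instability motivates the paper."""
    G = A.T @ A                       # n x n Gram matrix
    R = np.linalg.cholesky(G).T       # upper-triangular factor: G = R^T R
    Q = np.linalg.solve(R.T, A.T).T   # Q = A R^{-1} without forming R^{-1}
    return Q, R

rng = np.random.default_rng(0)
A = rng.standard_normal((500, 4))     # tall and skinny
Q, R = cholesky_qr(A)
```

Only the n×n Gram matrix needs to be communicated between processes, which is why this family of algorithms suits distributed tall-and-skinny matrices so well.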
7.
This paper presents the original and versatile architecture of a modular neural network and its application to super-resolution. Each module is a small multilayer perceptron, trained with the Levenberg-Marquardt method, and is used as a generic building block. By connecting the modules so as to compose their individual mappings, we build a lattice of modules that implements full connectivity between the pixels of the low-resolution input image and those of the higher-resolution output image. After the network has been trained on patterns made up of low- and high-resolution images of objects or scenes of the same kind, it can dramatically enhance the resolution of a similar object’s representation. The modular nature of the architecture allows the training phase to be readily parallelized on a network of PCs. Finally, it is shown that the network performs global-scale reconstruction of human faces from very low-resolution input images.
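The structural idea — many small, independent perceptron modules wired into full input-to-output connectivity — can be sketched without any training (the weights below are random and the module sizes are illustrative assumptions; the paper trains each module with Levenberg-Marquardt):

```python
import numpy as np

class Module:
    """A small one-hidden-layer perceptron used as a generic building block."""
    def __init__(self, n_in, n_hidden=8, rng=None):
        rng = rng or np.random.default_rng(0)
        self.W1 = rng.standard_normal((n_hidden, n_in)) * 0.1
        self.b1 = np.zeros(n_hidden)
        self.w2 = rng.standard_normal(n_hidden) * 0.1

    def __call__(self, x):
        return float(self.w2 @ np.tanh(self.W1 @ x + self.b1))

def super_resolve(lowres, scale=2, rng=None):
    """Lattice realizing full connectivity: every high-resolution output
    pixel is produced by its own module reading *all* low-resolution
    pixels. The modules are mutually independent, so both training and
    inference parallelize trivially across machines."""
    rng = rng or np.random.default_rng(1)
    x = lowres.ravel()
    h, w = lowres.shape
    out = np.empty((h * scale, w * scale))
    for i in range(h * scale):
        for j in range(w * scale):
            out[i, j] = Module(x.size, rng=rng)(x)   # one module per output pixel
    return out

img = np.arange(16.0).reshape(4, 4) / 15.0
hi = super_resolve(img)
```

Because each output pixel sees the whole input, the lattice can perform the global-scale reconstruction described above, rather than the purely local interpolation of patch-based upscalers.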
8.
In this contribution we present a new CORDIC architecture, called ‘semi-flat’, which considerably reduces both the latency and the amount of hardware. In our semi-flat architecture the first rotations are executed with an unfolded scheme, while the remaining iterations are flattened using a fast redundant addition tree. Detailed comparisons with other major contributions show that our semi-flat redundant CORDIC is 30% faster and occupies 39% less silicon area.
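For context, the plain iterative CORDIC that such architectures accelerate looks as follows (a floating-point reference in rotation mode; real hardware uses fixed-point shift-and-add, and the paper's contribution is precisely to unroll the first iterations and flatten the rest in redundant arithmetic, which this sketch does not show):

```python
import math

def cordic_rotate(angle, n_iters=32):
    """Rotation-mode CORDIC: rotate (1, 0) toward `angle` using
    micro-rotations by atan(2^-i), each implementable as shift-and-add.
    Scaling by the accumulated inverse gain K yields (cos a, sin a).
    Converges for |angle| < ~1.74 rad."""
    x, y, z = 1.0, 0.0, angle
    K = 1.0
    for i in range(n_iters):
        d = 1.0 if z >= 0 else -1.0          # rotation direction from residual angle
        x, y = x - d * y * 2.0 ** -i, y + d * x * 2.0 ** -i
        z -= d * math.atan(2.0 ** -i)
        K *= 1.0 / math.sqrt(1.0 + 2.0 ** (-2 * i))
    return x * K, y * K

c, s = cordic_rotate(math.pi / 6)
```

Each iteration depends on the sign of the previous residual angle, which serializes the classic design; reducing that dependency chain is what flat and semi-flat architectures trade hardware for.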
9.
The numerical computation of three-dimensional temperature and concentration distributions inside a fluidized bed with spray injection is presented. A continuum model based on rigorous mass and energy balance equations, developed by Nagaiah et al., is used for the three-dimensional simulations. The three-dimensional model equation for the nozzle spray is reformulated relative to that of Heinrich. To solve the non-linear partial differential equations with boundary conditions, a finite element method is used for the space discretization and an implicit Euler method for the time discretization. The time-dependent behavior of the air humidity, air temperature, degree of wetting, liquid-film temperature and particle temperature is presented using two different sets of experimental data, and the numerical results are validated against the experimental results. Finally, parallel numerical results obtained with domain decomposition methods are presented.
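The role of the implicit Euler time discretization can be shown on a scalar stand-in for the temperature equations (the relaxation ODE, symbols and coefficients below are illustrative, not the paper's balance equations): each step solves for the new state implicitly, which keeps the scheme stable even for large time steps.

```python
def implicit_euler_cooling(T0, T_air, k, dt, steps):
    """Backward Euler for the linear relaxation dT/dt = -k (T - T_air).
    The update T_{n+1} = T_n - k*dt*(T_{n+1} - T_air) is solved for
    T_{n+1}; for this linear ODE the solve has a closed form. Unlike
    forward Euler, the iteration is stable for any dt > 0."""
    T = T0
    history = [T]
    for _ in range(steps):
        T = (T + k * dt * T_air) / (1.0 + k * dt)   # the implicit solve
        history.append(T)
    return history

# deliberately huge step (k*dt = 5): forward Euler would oscillate and
# diverge here, backward Euler decays monotonically to the air temperature
temps = implicit_euler_cooling(T0=80.0, T_air=20.0, k=0.5, dt=10.0, steps=50)
```

In the paper's setting the "solve" at each step is a nonlinear finite-element system rather than a scalar division, but the stability rationale for choosing the implicit method is the same.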
10.
Inverse distance weighting (IDW) interpolation and viewshed are two popular algorithms for geospatial analysis. IDW interpolation assigns geographical values to unknown spatial points using values from ...
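The IDW scheme mentioned above is simple to state concretely (a minimal sketch with the common power-2 weights; function and variable names are illustrative): the value at an unknown point is the weighted mean of the known samples, with weights 1/dᵖ, so nearer samples dominate.

```python
def idw(samples, qx, qy, power=2.0):
    """Inverse distance weighting at query point (qx, qy).
    samples: iterable of (x, y, value). A sample coinciding with the
    query point is returned exactly (its weight would be infinite)."""
    num = den = 0.0
    for x, y, v in samples:
        d2 = (x - qx) ** 2 + (y - qy) ** 2
        if d2 == 0.0:
            return v                       # exact hit on a known sample
        w = d2 ** (-power / 2.0)           # 1 / distance^power
        num += w * v
        den += w
    return num / den

pts = [(0.0, 0.0, 10.0), (1.0, 0.0, 20.0)]
mid = idw(pts, 0.5, 0.0)                   # equidistant -> plain average
```

Every query point is independent of every other, which is why IDW (like viewshed) is a natural target for the parallel geospatial processing this line of work studies.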