991.
In this study, a novel segmentation technique is proposed for multispectral satellite image compression. A segmentation decision rule composed of the principal eigenvectors of the image correlation matrix is derived to determine the similarity of the image characteristics of two image blocks. Based on this decision rule, we develop an eigenregion-based segmentation technique that divides the original image into appropriate eigenregions according to their local terrain characteristics. To achieve better compression efficiency, each eigenregion image is then compressed by an efficient compression algorithm, the eigenregion-based eigensubspace transform (ER-EST). ER-EST combines a 1D eigensubspace transform (EST) with a 2D-DCT to decorrelate the data in the spectral and spatial domains. Before performing EST, the dimension of the EST transformation matrix is estimated by an information criterion, so that each eigenregion image may be approximated by lower-dimensional components in the eigensubspace. Simulation tests performed on SPOT and Landsat TM images demonstrate that the proposed compression scheme is suitable for multispectral satellite imagery.
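As a rough illustration of such an eigenvector-based decision rule (not the paper's exact formulation), the sketch below compares the principal eigenvectors of the spectral correlation matrices of two blocks; the cosine-similarity test and its threshold are assumptions.

```python
import numpy as np

def principal_eigenvector(block):
    """Principal eigenvector of the spectral correlation matrix of a block.

    `block` is an (H, W, B) multispectral block; pixels are treated as
    B-dimensional spectral vectors."""
    pixels = block.reshape(-1, block.shape[-1]).astype(float)
    corr = pixels.T @ pixels / pixels.shape[0]   # B x B correlation matrix
    eigvals, eigvecs = np.linalg.eigh(corr)      # eigenvalues in ascending order
    return eigvecs[:, -1]                        # eigenvector of the largest eigenvalue

def blocks_similar(block_a, block_b, threshold=0.98):
    """Decide whether two blocks share similar spectral characteristics by
    comparing their principal eigenvectors (cosine similarity, an assumption)."""
    va, vb = principal_eigenvector(block_a), principal_eigenvector(block_b)
    return abs(float(va @ vb)) >= threshold

# Toy usage: two nearly identical 8x8 blocks with 6 spectral bands.
rng = np.random.default_rng(0)
a = rng.random((8, 8, 6))
b = a + 0.01 * rng.random((8, 8, 6))
print(blocks_similar(a, b))   # -> True
```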
992.
This paper presents an iterative spectral framework for pairwise clustering and perceptual grouping. Our model is expressed in terms of two sets of parameters. First, there are cluster memberships, which represent the affinity of objects to clusters. Second, there is a matrix of link weights for pairs of tokens. We adopt a model in which these two sets of variables are governed by a Bernoulli model. We show how the likelihood function resulting from this model may be maximised with respect to both the elements of the link-weight matrix and the cluster-membership variables, and we establish the link between maximisation of the log-likelihood function and the eigenvectors of the link-weight matrix. This leads to an algorithm in which we iteratively update the link-weight matrix by repeatedly refining its modal structure. Each iteration of the algorithm is a three-step process. First, we compute a link-weight matrix for each cluster by taking the outer product of the current cluster-membership indicator vector for that cluster. Second, we extract the leading eigenvector from each modal link-weight matrix. Third, we compute a revised link-weight matrix by taking the sum of the outer products of the leading eigenvectors of the modal link-weight matrices.
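A minimal sketch of one reading of this three-step update follows. Masking the current link weights with the membership outer product and the soft refresh of the memberships are interpretive assumptions, not the paper's exact procedure.

```python
import numpy as np

def iterate_link_weights(W, S, n_iters=10):
    """Iterative modal refinement of a link-weight matrix.

    W : (n, n) symmetric link-weight matrix
    S : (n, k) cluster-membership indicators (one column per cluster)"""
    for _ in range(n_iters):
        modal_vectors = []
        for c in range(S.shape[1]):
            s = S[:, c]
            # Step 1: per-cluster link-weight matrix (membership outer product,
            # applied elementwise to the current weights -- an assumption).
            W_c = np.outer(s, s) * W
            # Step 2: leading eigenvector of the modal matrix.
            vals, vecs = np.linalg.eigh(W_c)
            modal_vectors.append(vecs[:, -1])
        # Step 3: revised link weights = sum of outer products of leading eigenvectors.
        W = sum(np.outer(v, v) for v in modal_vectors)
        # Refresh memberships from the modal vectors (soft assignment, an assumption).
        S = np.abs(np.column_stack(modal_vectors))
        S = S / S.sum(axis=1, keepdims=True)
    return W, S

# Toy usage: 6 objects, 2 clusters.
rng = np.random.default_rng(0)
W0 = rng.random((6, 6)); W0 = (W0 + W0.T) / 2
S0 = np.array([[1, 0], [1, 0], [1, 0], [0, 1], [0, 1], [0, 1]], dtype=float)
W_final, S_final = iterate_link_weights(W0, S0, n_iters=5)
```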
993.
Motion estimation is one of the kernel issues in the MPEG series. In this correspondence, a novel two-phase Hilbert-scan-based search algorithm for block motion estimation is presented. In the intra-phase, a segmentation of the Hilbert curve is first applied to the current block, and a novel coarse-to-fine data structure is then developed to eliminate impossible reference blocks in the search window of the reference frame. In the inter-phase, a new prediction scheme for estimating the initial motion vector of the current block is presented. Experimental results reveal that, compared with the GAPD algorithm, the proposed algorithm achieves better execution time and estimation accuracy. Under the same estimation accuracy, the proposed algorithm has better execution time than the FS algorithm. Compared with the TSS algorithm, the proposed algorithm has better estimation accuracy but worse execution time.
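The Hilbert-curve data structure itself is not reproduced here, but the sketch below illustrates the general idea of eliminating impossible candidates during block matching with a cheap sum-based lower bound on the SAD; the block size, search radius, and the bound are generic choices, not the authors' coarse-to-fine structure.

```python
import numpy as np

def sad(a, b):
    """Sum of absolute differences between two equally sized blocks."""
    return int(np.abs(a.astype(int) - b.astype(int)).sum())

def motion_search(cur_block, ref_frame, top, left, radius=7):
    """Block motion search with a coarse elimination test.

    Uses |sum(cur) - sum(candidate)| <= SAD to skip candidates that cannot
    beat the best SAD found so far (a sketch of eliminating "impossible"
    reference blocks; the paper applies such ideas on Hilbert-curve segments)."""
    h, w = cur_block.shape
    cur_sum = int(cur_block.sum())
    best_mv, best_sad = (0, 0), None
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            y, x = top + dy, left + dx
            if y < 0 or x < 0 or y + h > ref_frame.shape[0] or x + w > ref_frame.shape[1]:
                continue
            cand = ref_frame[y:y + h, x:x + w]
            # Coarse test: lower bound on SAD from the block sums.
            if best_sad is not None and abs(cur_sum - int(cand.sum())) >= best_sad:
                continue
            d = sad(cur_block, cand)
            if best_sad is None or d < best_sad:
                best_sad, best_mv = d, (dy, dx)
    return best_mv, best_sad

# Toy usage: the current block matches the reference at an offset of (2, 2).
rng = np.random.default_rng(0)
ref = rng.integers(0, 256, size=(64, 64), dtype=np.int64)
cur = ref[20:36, 24:40]
print(motion_search(cur, ref, top=18, left=22))   # -> ((2, 2), 0)
```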
994.
The great diffusion of digital cameras and the widespread use of the internet have produced a mass of digital images depicting a huge variety of subjects, generally acquired by unknown imaging systems under unknown lighting conditions. This makes color balancing, i.e. recovering the color characteristics of the original scene, increasingly difficult. In this paper, we describe a method for detecting and removing a color cast (a superimposed color due to the lighting conditions or to the characteristics of the capturing device) from a digital photo without any a priori knowledge of its semantic content. First, a cast detector based on simple image statistics classifies the input image as presenting no cast, an evident cast, an ambiguous cast, a predominant color that must be preserved (as in underwater images or single-color close-ups), or as unclassifiable. A cast remover, a modified version of the white-balance algorithm, is then applied in cases of evident or ambiguous cast. The proposed method has been tested with positive results on a data set of some 750 photos.
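A minimal sketch of the two stages under strong simplifying assumptions: cast detection from per-channel means (the thresholds are illustrative) and a plain gray-world white balance standing in for the paper's modified white-balance step.

```python
import numpy as np

def detect_cast(rgb, strong=0.08, weak=0.03):
    """Very rough cast classifier from simple channel statistics.

    rgb: float array in [0, 1], shape (H, W, 3). The thresholds `strong`
    and `weak` are illustrative, not the values used in the paper."""
    means = rgb.reshape(-1, 3).mean(axis=0)
    gray = means.mean()
    deviation = np.abs(means - gray).max()
    if deviation > strong:
        return "evident cast"
    if deviation > weak:
        return "ambiguous cast"
    return "no cast"

def remove_cast_gray_world(rgb):
    """Gray-world white balance: scale each channel so its mean matches the
    global mean (a common stand-in for a modified white-balance step)."""
    means = rgb.reshape(-1, 3).mean(axis=0)
    scale = means.mean() / np.maximum(means, 1e-6)
    return np.clip(rgb * scale, 0.0, 1.0)

# Toy usage: a uniform warm (yellowish) image.
img = np.full((4, 4, 3), [0.6, 0.5, 0.3])
print(detect_cast(img))                  # -> "evident cast"
print(remove_cast_gray_world(img)[0, 0]) # channels pulled toward a common gray
```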
995.
Human motion analysis is currently one of the most active research topics in computer vision. This paper presents a model-based approach to recovering the motion parameters of walking people from monocular image sequences within a CONDENSATION framework. From semi-automatically acquired training data, we learn a motion model represented as Gaussian distributions, and we explore motion constraints by modeling the dependencies among motion parameters as conditional distributions. Both are then integrated into a dynamic model that concentrates factored sampling in the areas of the state space with the most posterior information. To measure the observation density accurately and robustly, a pose evaluation function (PEF) combining both boundary and region information is proposed. The function is modeled with a radial term to improve the efficiency of the factored sampling. We also address the automatic acquisition of the initial model pose and recovery from severe failures. A large number of experiments carried out in both indoor and outdoor scenes demonstrate that the proposed approach works well.
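A generic CONDENSATION (factored-sampling) iteration is sketched below; the Gaussian motion model and the pose-evaluation callback are placeholders for the learned dynamic model and the PEF described above.

```python
import numpy as np

def condensation_step(particles, weights, motion_mean, motion_cov, pose_eval, rng):
    """One factored-sampling (CONDENSATION) iteration.

    particles : (N, D) pose-parameter samples
    weights   : (N,) normalized weights
    motion_mean, motion_cov : Gaussian motion model (assumed form)
    pose_eval : callable mapping a pose vector to an observation likelihood,
                standing in for the paper's pose evaluation function (PEF)"""
    n, d = particles.shape
    # Resample according to the current weights.
    idx = rng.choice(n, size=n, p=weights)
    resampled = particles[idx]
    # Predict: propagate each sample with the Gaussian motion model.
    predicted = resampled + rng.multivariate_normal(motion_mean, motion_cov, size=n)
    # Measure: weight each sample by the pose evaluation function.
    new_weights = np.array([pose_eval(p) for p in predicted], dtype=float)
    new_weights = new_weights / new_weights.sum()
    return predicted, new_weights

# Toy usage: 2-D pose, likelihood peaked at the origin.
rng = np.random.default_rng(0)
parts = rng.normal(size=(100, 2)); wts = np.full(100, 1 / 100)
pose_eval = lambda p: np.exp(-0.5 * p @ p)
parts, wts = condensation_step(parts, wts, np.zeros(2), 0.01 * np.eye(2), pose_eval, rng)
```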
996.
Tool wear monitoring can be achieved by analyzing the texture of machined surfaces. In this paper, we present the new connectivity-oriented fast Hough transform, which easily detects all line segments in binary edge images of machined-surface textures. The features extracted from the line segments are found to be highly correlated with the level of tool wear. A multilayer perceptron neural network is applied to estimate the flank wear in various machining processes. Our experiments show that this Hough-transform-based approach is effective in analyzing the quality of machined surfaces and could be used to monitor tool wear. A performance analysis of our Hough transform is also provided.
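A hedged sketch of the downstream pipeline, assuming line segments have already been detected: summary features are computed from the segments and fed to a small MLP regressor. The feature set, network size, and the toy data are illustrative, not the paper's.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

def segment_features(segments):
    """Summary features from detected line segments, each given as (x1, y1, x2, y2).

    The feature set (count, mean/std length, orientation spread) is an
    assumption; the paper derives its own features from its
    connectivity-oriented Hough transform."""
    segments = np.asarray(segments, dtype=float)
    dx = segments[:, 2] - segments[:, 0]
    dy = segments[:, 3] - segments[:, 1]
    lengths = np.hypot(dx, dy)
    angles = np.arctan2(dy, dx)
    return np.array([len(segments), lengths.mean(), lengths.std(), angles.std()])

# Map segment features of machined-surface textures to flank wear with an MLP.
# X: one feature row per surface image; y: measured flank wear (toy values).
X = np.array([segment_features([(0, 0, 10, 1), (2, 3, 12, 4)]),
              segment_features([(0, 0, 5, 4), (1, 1, 3, 6)])])
y = np.array([0.12, 0.35])
model = MLPRegressor(hidden_layer_sizes=(8,), max_iter=2000, random_state=0).fit(X, y)
print(model.predict(X))
```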
997.
998.
Many recent tracking algorithms rely on model learning methods. A promising approach consists of modeling the object motion with switching autoregressive models. This article is concerned with parametric switching dynamical models governed by a hidden Markov chain. Maximum likelihood estimation of the parameters of these models is described, and the formulas of the EM algorithm are detailed. Moreover, the problem of choosing a good and parsimonious model with the BIC criterion is considered, with emphasis on choosing a reasonable number of hidden states. Numerical experiments on both simulated and real data sets highlight the ability of this approach to properly describe object motions with sudden changes. The two applications on real data concern object tracking and heart tracking.
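A small sketch of BIC-based selection of the number of hidden states; the parameter bookkeeping below is an assumption about a generic switching AR model, and the log-likelihood values are made up for illustration.

```python
import numpy as np

def switching_ar_bic(log_likelihood, n_obs, n_states, ar_order, obs_dim=1):
    """BIC score for a switching AR model with `n_states` hidden regimes.

    Assumed parameter count (adjust to the exact model):
      - transition matrix:        n_states * (n_states - 1)
      - initial distribution:     n_states - 1
      - AR coefficients + noise:  n_states * (ar_order * obs_dim**2 + obs_dim)
    With this sign convention, lower BIC is better."""
    n_params = (n_states * (n_states - 1)
                + (n_states - 1)
                + n_states * (ar_order * obs_dim ** 2 + obs_dim))
    return -2.0 * log_likelihood + n_params * np.log(n_obs)

# Choose a parsimonious number of hidden states from fitted log-likelihoods
# (the values here are invented for illustration only).
fits = {1: -512.3, 2: -430.8, 3: -424.1, 4: -421.9}
scores = {k: switching_ar_bic(ll, n_obs=400, n_states=k, ar_order=2)
          for k, ll in fits.items()}
print(min(scores, key=scores.get))   # number of states with the lowest BIC
```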
999.
1000.
An adaptive fuzzy inference neural network (AFINN) is proposed in this paper. It has self-construction, parameter-estimation and rule-extraction abilities. The structure of AFINN is formed in four phases: (1) initial rule creation, (2) selection of important input elements, (3) identification of the network structure, and (4) parameter estimation using the LMS (least-mean-square) algorithm. When the number of input dimensions is large, conventional fuzzy systems often cannot handle the task correctly because the degree of each rule becomes too small. AFINN solves this problem by modifying the learning and inference algorithm.
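A minimal sketch of phase (4), assuming zero-order Sugeno-style consequents updated by LMS from normalized rule firing strengths; the exact AFINN consequent form and learning details are not reproduced here.

```python
import numpy as np

def lms_update(weights, firing_strengths, target, lr=0.05):
    """One LMS step for the consequent weights of a fuzzy inference network.

    firing_strengths : (R,) normalized rule activations for one input sample
    weights          : (R,) rule consequents (zero-order Sugeno-style, an
                       assumption about the consequent form)"""
    output = float(firing_strengths @ weights)
    error = target - output
    return weights + lr * error * firing_strengths, output

# Toy usage: three rules, LMS iterated over synthetic samples generated from
# a fixed "true" rule base.
rng = np.random.default_rng(1)
w = np.zeros(3)
for _ in range(200):
    mu = rng.random(3)
    mu = mu / mu.sum()                   # normalized firing strengths
    t = mu @ np.array([1.0, -2.0, 0.5])  # target from the "true" consequents
    w, _ = lms_update(w, mu, t)
print(np.round(w, 2))                    # approaches [1.0, -2.0, 0.5]
```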