991.
Artificial neural networks (ANNs) have been widely applied to stock market prediction because of their strong learning ability. However, they often yield inconsistent and unpredictable performance on noisy financial data, owing to the difficulty of determining the many factors involved in their design. Prior studies have suggested the genetic algorithm (GA) to mitigate these problems, but most of them optimize only one or two architectural factors of the ANN. Against this background, this paper presents a global optimization approach for ANN-based prediction of the stock price index. In this study, the GA optimizes multiple architectural factors and feature transformations of the ANN synergistically, relieving the limitations of the conventional backpropagation algorithm. Experiments show that the proposed model outperforms conventional approaches in predicting the stock price index.
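A minimal sketch of the idea: a GA whose chromosome jointly encodes several ANN design factors at once (hidden-layer size, learning rate, and a feature-selection mask). All names, value ranges, and the placeholder fitness function are illustrative assumptions, not the paper's actual encoding — a real implementation would train and validate a network inside `fitness`.

```python
import random

random.seed(0)
N_FEATURES = 8  # number of candidate input features (illustrative)

def random_chromosome():
    # One chromosome encodes multiple architectural factors together.
    return {
        "hidden_units": random.randint(4, 64),
        "learning_rate": 10 ** random.uniform(-4, -1),
        "feature_mask": [random.random() < 0.5 for _ in range(N_FEATURES)],
    }

def fitness(ch):
    # Placeholder for validation accuracy of an ANN trained with these
    # settings; here just a smooth synthetic score for demonstration.
    used = sum(ch["feature_mask"]) or 1
    base = 1.0 / (1.0 + abs(ch["hidden_units"] - 32) / 32
                  + abs(ch["learning_rate"] - 0.01))
    return base * used / N_FEATURES

def crossover(a, b):
    # Uniform crossover over all encoded factors.
    return {
        "hidden_units": random.choice([a["hidden_units"], b["hidden_units"]]),
        "learning_rate": random.choice([a["learning_rate"], b["learning_rate"]]),
        "feature_mask": [random.choice(p)
                         for p in zip(a["feature_mask"], b["feature_mask"])],
    }

def mutate(ch, rate=0.1):
    if random.random() < rate:
        ch["hidden_units"] = random.randint(4, 64)
    if random.random() < rate:
        ch["learning_rate"] = 10 ** random.uniform(-4, -1)
    ch["feature_mask"] = [(not g) if random.random() < rate else g
                          for g in ch["feature_mask"]]
    return ch

def evolve(generations=20, pop_size=20):
    pop = [random_chromosome() for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]          # truncation selection
        children = [mutate(crossover(random.choice(parents),
                                     random.choice(parents)))
                    for _ in range(pop_size - len(parents))]
        pop = parents + children
    return max(pop, key=fitness)

best = evolve()
```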
992.
Embedded systems are now widely used in ubiquitous computing environments, including digital set-top boxes, mobile phones, and ubiquitous sensor networks (USNs). The significance of security has been growing, as it must be embedded in all of these systems. Many researchers have sought to verify the integrity of application binaries downloaded onto embedded systems; existing solutions divide into hardware-based and software-based methods. This research takes the software-based approach. Unlike existing work (Seshadri et al., Proc. the IEEE Symposium on Security and Privacy, 2004; Seshadri et al., Proc. the Symposium on Operating Systems Principles, 2005) based on the standardized model (TTAS.KO-11.0054, 2006) published in Korea, there is no extra verifier, and the verification function is performed within the target system. Contrary to previous schemes (Jung et al., 2008; Lee et al., LNCS, vol. 4808, pp. 346–355, 2007), verification results are stored in a single validation check bit, instead of storing a signature value for each application binary file in the i-node structure, in order to reduce run-time execution overhead. Consequently, the proposed scheme is more efficient: it dramatically reduces storage overhead, and computationally it performs one hash computation at initial execution and thereafter compares only the single validation check bit, instead of running signature and hash algorithms for every application binary. Furthermore, when the i-node structure or file data changes frequently, the scheme provides far more effective verification performance than the previous schemes.
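The cost structure of the scheme can be illustrated with a toy model: each file's "i-node" carries a single validation bit instead of a stored signature, so only the first execution pays for a hash, and any modification clears the bit. All names here (`TRUSTED_HASHES`, `inode_valid_bit`) are illustrative assumptions, not the paper's actual data structures.

```python
import hashlib

TRUSTED_HASHES = set()   # stand-in for hashes of legitimately installed binaries
inode_valid_bit = {}     # path -> 1 once the binary has been verified

def register_trusted(data):
    TRUSTED_HASHES.add(hashlib.sha256(data).hexdigest())

def on_file_modified(path):
    # Any write to the file clears its validation bit, forcing re-verification.
    inode_valid_bit[path] = 0

def verify_before_exec(path, data):
    if inode_valid_bit.get(path) == 1:
        return True          # fast path: compare 1 validation bit only
    # Slow path, taken once: a single hash computation against the trust store.
    ok = hashlib.sha256(data).hexdigest() in TRUSTED_HASHES
    inode_valid_bit[path] = 1 if ok else 0
    return ok
```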
993.
This paper describes a scheme for proactive human search for a designated person in an unexplored indoor environment, without human operation or intervention. For human identification with prior information, a new approach is proposed that is robust to the illumination and distance variations of indoor environments. In addition, an efficient exploration method based on an octree structure, suitable for path planning in an office configuration, is employed. All of these functionalities are integrated in a message- and component-based architecture for efficient integration and control of the system. The approach is demonstrated by a successful human search in the challenging robot mission of the 2009 Robot Grand Challenge Contest.
994.
IP traceback is an effective measure to deter internet attacks. A number of techniques have been suggested to realize IP traceback. The Fragment Marking Scheme (FMS) is one of the most promising techniques. However, it suffers a combinatorial explosion when computing the attacker's location in the presence of multiple attack paths. The Tagged Fragment Marking Scheme (TFMS) has been suggested to suppress the combinatorial explosion by attaching a tag to each IP fragment. Tagging is effective because it allows the victim to differentiate IP fragments belonging to different routers, thereby greatly reducing the search space and finding the correct IP fragments. TFMS, however, increases the number of false positives when the number of routers on the attack path grows beyond some threshold. In this paper, we rigorously analyze the performance of TFMS to determine the correlation between the number of routers and the false positive error rate. Using a probabilistic argument, we determine the formulas for combination counts and error probabilities in terms of the number of routers. Under TFMS, our results show that we can reduce the required time to find an attacker's location at the cost of a low error rate for a moderate number of routers.
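The search-space reduction, and the residual false-positive source, can be illustrated numerically. The counts below follow the usual FMS setup (each router's mark is split into k fragments), but the parameter values are assumptions for illustration, not the paper's derived formulas.

```python
def fms_candidates(n_routers, k_fragments=8):
    # Untagged FMS: at a given distance, any of the n routers' fragments may
    # occupy each of the k fragment slots, so the victim must test n**k
    # candidate reassemblies -- the combinatorial explosion.
    return n_routers ** k_fragments

def tag_collision_prob(n_routers, tag_space):
    # TFMS groups fragments by tag, so reassembly is per router; false
    # positives arise when two routers at the same distance draw the same
    # tag (a birthday-style collision over the tag space).
    p_distinct = 1.0
    for i in range(n_routers):
        p_distinct *= (tag_space - i) / tag_space
    return 1.0 - p_distinct
```

With four routers at one distance, untagged FMS already faces 4**8 = 65536 candidate reassemblies, while the tagged scheme trades that search for a small tag-collision probability that grows with the number of routers.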
995.
In the blogosphere, there exist posts relevant to a particular subject and blogs that show interest in that subject. In this paper, we define such a set of posts and blogs as a blog community and propose a method for extracting the blog community associated with a particular subject. The proposed method is based on the idea that blogs that have performed actions (e.g., read, comment, trackback, scrap) on posts about a particular subject are the ones interested in that subject, and that posts that have received actions from such blogs are the ones that cover the subject. The method starts with a small number of manually selected seed posts containing the subject. It then selects the blogs whose actions on the seed posts exceed a threshold, and the posts whose received actions exceed a threshold. Repeating these two steps gradually expands the blog community. This paper presents various techniques to improve the accuracy of the proposed method. Experimental results show that the proposed method achieves higher accuracy than the methods proposed in prior research. We also discuss business applications of the extracted community, such as target marketing, market monitoring, improving search results, finding power bloggers, and revitalizing the blogosphere.
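The alternating expansion can be sketched as a fixed-point iteration. This is a simplification under stated assumptions: a flat action count stands in for whatever weighting of read/comment/trackback/scrap actions the paper uses, and the threshold values are arbitrary.

```python
from collections import Counter

def extract_community(actions, seed_posts,
                      blog_thresh=2, post_thresh=2, max_iter=10):
    """actions: blog -> set of post ids the blog acted on."""
    posts, blogs = set(seed_posts), set()
    for _ in range(max_iter):
        # Step 1: blogs that acted on enough community posts join the community.
        new_blogs = {b for b, acted in actions.items()
                     if len(acted & posts) >= blog_thresh}
        # Step 2: posts receiving enough actions from community blogs join.
        counts = Counter(p for b in new_blogs for p in actions[b])
        new_posts = posts | {p for p, c in counts.items() if c >= post_thresh}
        if new_blogs == blogs and new_posts == posts:
            break                   # fixed point: the community stopped growing
        blogs, posts = new_blogs, new_posts
    return blogs, posts
```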
996.
Accurate depth estimation is a challenging, yet essential step in the conversion of a 2D image sequence to a 3D stereo sequence. We present a novel approach to construct a temporally coherent depth map for each image in a sequence. The quality of the estimated depth is high enough for the purpose of 2D to 3D stereo conversion. Our approach first combines the video sequence into a panoramic image. A user can scribble on this single panoramic image to specify depth information. The depth is then propagated to the remainder of the panoramic image. This depth map is then remapped to the original sequence and used as the initial guess for each individual depth map in the sequence. Our approach greatly simplifies the required user interaction during the assignment of the depth and allows for relatively free camera movement during the generation of a panoramic image. We demonstrate the effectiveness of our method by showing stereo converted sequences with various camera motions.
997.
Conventional urodynamics systems have been widely used for the assessment of bladder function. However, they have some drawbacks: unfamiliar surroundings for the patient, a restrictive position during the test, the expense and immovability of the instrument, and the unphysiological filling of the bladder. To mitigate these problems, we developed a fully ambulatory urodynamics monitoring system, which enables the abdominal pressure to be measured in a non-invasive manner, as well as the manual recording of various events such as bladder sensations or leakage of urine. Conventional (CMG) and furosemide-stimulated filling cystometry (FCMG) were performed for 28 patients with neurogenic bladders caused by spinal cord injury (24 males and 4 females, age: 49.4 ± 13.9 years, BMI: 23.5 ± 2.4). There were high correlation coefficients (r=0.97 ± 0.02) between the clinical parameters measured by the conventional rectal catheter and those measured by our non-invasive algorithm in the FCMG studies. Also, 10 of the patients (36%) showed different bladder reflex activity between conventional CMG and FCMG (p<0.05). In the patients with detrusor overactivity, the average volume and detrusor pressure at bladder sensation in FCMG were lower than those in CMG, while the average compliance was higher (p<0.05). In the patients with areflexic bladders, the number of patients with detrusor overactivity was higher in FCMG, and leakage was observed more frequently. These results show that our system could be a useful additional tool in the clinical assessment of patients for whom conventional cystometry fails to explain their symptoms.
998.
This paper presents a multi-view acquisition system using multi-modal sensors, composed of time-of-flight (ToF) range sensors and color cameras. Our system captures multiple pairs of color images and depth maps at multiple viewing directions. In order to ensure acceptable measurement accuracy, we compensate errors in sensor measurement and calibrate the multi-modal devices. Through extensive experiments and analysis, we identify the major sources of systematic error in sensor measurement and construct an error model for compensation. As a result, we provide a practical solution for real-time error compensation of depth measurement. Moreover, we implement a calibration scheme for the multi-modal devices, unifying the spatial coordinates of the multi-modal sensors. The main contribution of this work is a thorough analysis of systematic error in sensor measurement, yielding a reliable methodology for robust error compensation. The proposed system offers a real-time multi-modal sensor calibration method and is thereby applicable to the 3D reconstruction of dynamic scenes.
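The split between offline error-model fitting and cheap per-frame compensation can be sketched as follows. The linear bias model and the synthetic calibration numbers are assumptions for illustration; the paper builds its error model from real ToF measurements and may use a richer functional form.

```python
# Calibration pairs: depth reported by the ToF sensor vs. ground truth
# (e.g., from a tracked calibration target). Values are synthetic.
measured = [0.5, 1.0, 1.5, 2.0, 2.5, 3.0]          # sensor-reported depth (m)
true_depth = [1.01 * d + 0.03 for d in measured]   # ground-truth depth (m)

def fit_linear(xs, ys):
    # Ordinary least squares for y = a*x + b, run once offline.
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    a = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    return a, my - a * mx

a, b = fit_linear(measured, true_depth)

def compensate(d):
    # Applied per pixel at run time; a single multiply-add is cheap enough
    # for real-time correction of full depth maps.
    return a * d + b
```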
999.
This research focuses on the problem of scheduling jobs on a single machine that requires periodic maintenance, with the objective of minimizing the number of tardy jobs. We present a two-phase heuristic algorithm in which an initial solution is first obtained with a method modified from Moore's algorithm for the problem without maintenance, and the solution is then improved in the second phase. The performance of the proposed heuristic is evaluated through computational experiments on randomly generated problem instances. Results show that the heuristic gives solutions close to those obtained from a commercial integer programming solver in a much shorter time, and that it outperforms an existing heuristic algorithm in terms of solution quality.
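The first phase builds on the classic Moore (Moore–Hodgson) rule for minimizing tardy jobs without maintenance: process jobs in earliest-due-date order and, whenever a due date is missed, evict the longest job scheduled so far. A minimal version of that baseline (before any modification for maintenance periods) can be sketched as:

```python
import heapq

def moore_hodgson(jobs):
    """jobs: list of (processing_time, due_date) pairs. Returns the number
    of jobs that can finish on time on one machine (no maintenance)."""
    on_time, t = [], 0                      # max-heap via negated times
    for p, d in sorted(jobs, key=lambda j: j[1]):   # EDD order
        heapq.heappush(on_time, -p)
        t += p
        if t > d:                           # due date missed: evict the
            t += heapq.heappop(on_time)     # longest job (popped value < 0)
    return len(on_time)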
1000.
Not all interest points are equally interesting. The most valuable interest points lead to optimal performance of the computer vision method in which they are employed. But a measure of this kind depends on the chosen vision application. We propose a more general performance measure based on the spatial invariance of interest points under changing acquisition parameters, obtained by measuring the spatial recall rate. The aim of this paper is to investigate the performance of a number of well-established interest point detection methods. Automatic performance evaluation of interest points is hard because the true correspondence is generally unknown. We overcome this by providing an extensive data set with known spatial correspondence. The data is acquired with a camera mounted on a 6-axis industrial robot, providing very accurate camera positioning. Furthermore, the scene is scanned with a structured light scanner, resulting in precise 3D surface information. In total, 60 scenes are depicted, ranging from model houses and building materials to fruit and vegetables, fabric, and printed media. Each scene is depicted from 119 camera positions, and 19 individual LED illuminations are used for each position. The LED illumination provides the option of artificially relighting the scene from a range of light directions. This data set has given us the ability to systematically evaluate the performance of a number of interest point detectors. The highlights of the conclusions are that the fixed-scale Harris corner detector performs best overall, followed by the Hessian-based detectors and the difference of Gaussians (DoG). The methods based on scale-space features perform better overall than other methods, especially when varying the distance to the scene, where the FAST corner detector, Edge Based Regions (EBR), and Intensity Based Regions (IBR) in particular perform poorly. The performance of Maximally Stable Extremal Regions (MSER) is moderate.
We observe a relatively large decline in performance with changes in both viewpoint and light direction. Some of our observations support previous findings while others contradict them.
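The spatial recall rate can be sketched as follows: a reference interest point is "recalled" in a second view if some detection lies within a pixel tolerance of the point's known ground-truth position there (known from the robot's camera pose and the 3D surface scan). The tolerance `eps` and the simple nearest-detection matching rule are assumptions for illustration, not the paper's exact protocol.

```python
def spatial_recall(gt_positions, detected, eps=2.0):
    """gt_positions: where each reference interest point should appear in
    the second view. detected: interest points found in the second view.
    Returns the fraction of reference points recalled within eps pixels."""
    if not gt_positions:
        return 0.0
    hits = sum(1 for gx, gy in gt_positions
               if any((gx - dx) ** 2 + (gy - dy) ** 2 <= eps ** 2
                      for dx, dy in detected))
    return hits / len(gt_positions)
```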