Paid full text: 2393 | Free: 156 | Free (domestic): 18 | Industrial Technology: 2567 articles
Results by publication year (articles per year):
2024: 5 | 2023: 79 | 2022: 107 | 2021: 186 | 2020: 145 | 2019: 177 | 2018: 169 | 2017: 135 | 2016: 151 | 2015: 106
2014: 128 | 2013: 219 | 2012: 169 | 2011: 139 | 2010: 108 | 2009: 102 | 2008: 64 | 2007: 59 | 2006: 40 | 2005: 25
2004: 26 | 2003: 23 | 2002: 16 | 2001: 6 | 2000: 18 | 1999: 19 | 1998: 27 | 1997: 14 | 1996: 11 | 1995: 17
1994: 15 | 1993: 9 | 1992: 7 | 1991: 5 | 1990: 7 | 1989: 5 | 1987: 3 | 1985: 2 | 1984: 5 | 1983: 2
1982: 1 | 1981: 3 | 1980: 3 | 1979: 1 | 1977: 1 | 1976: 1 | 1974: 2 | 1973: 1 | 1972: 2 | 1968: 1
Sort order: 2567 results found, search time 261 ms
61.
Cell formation is a classical problem in cellular manufacturing systems that concerns the allocation of parts, operators, and machines to cells. This paper presents a new mathematical programming model for cell formation that simultaneously incorporates operators' personality and decision-making styles, skill in working with machines, and job security. The model involves five objectives: (1) minimising the cost of adding new machines to and removing machines from the cells at the beginning of each period, (2) minimising the total cost of material handling, (3) maximising job security, (4) minimising inconsistency of operators' decision styles within cells, and (5) minimising the cost of suitable skills. Because of the NP-hard nature of the proposed model, NSGA-II, a powerful meta-heuristic, is used to solve large-sized problems. Furthermore, response surface methodology (RSM) is used for parameter tuning. Lastly, MOPSO and two scalarization methods are employed to validate the results. To the best of our knowledge, this is the first study to present a multi-objective mathematical model for the cell formation problem that considers operators' personality and skill, addition and removal of machines, and job security.
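For readers who want to experiment with this kind of formulation, the following is a minimal, hypothetical sketch of a two-objective machine-to-cell assignment solved with NSGA-II via the pymoo library. The toy problem, objectives, and data are invented for illustration and do not reproduce the paper's five-objective model.

```python
import numpy as np
from pymoo.core.problem import ElementwiseProblem
from pymoo.algorithms.moo.nsga2 import NSGA2
from pymoo.optimize import minimize

class ToyCellFormation(ElementwiseProblem):
    """Hypothetical toy: assign n_machines machines to n_cells cells,
    minimising (1) inter-cell material flow and (2) cell-size imbalance."""
    def __init__(self, n_machines=8, n_cells=2):
        self.n_cells = n_cells
        # Made-up symmetric part-flow matrix between machine pairs
        rng = np.random.default_rng(0)
        f = rng.integers(0, 10, (n_machines, n_machines))
        self.flow = np.triu(f, 1) + np.triu(f, 1).T
        super().__init__(n_var=n_machines, n_obj=2,
                         xl=0, xu=n_cells - 1, vtype=int)

    def _evaluate(self, x, out, *args, **kwargs):
        cells = np.round(x).astype(int)
        # Flow between machines placed in different cells (halved: symmetric)
        inter = self.flow[cells[:, None] != cells[None, :]].sum() / 2.0
        counts = np.bincount(cells, minlength=self.n_cells)
        imbalance = counts.max() - counts.min()
        out["F"] = [inter, imbalance]

res = minimize(ToyCellFormation(), NSGA2(pop_size=40),
               ("n_gen", 50), seed=1, verbose=False)
print(res.F[:5])  # a few points on the approximated Pareto front
```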
62.
Electroencephalography (EEG) is widely used in a variety of research and clinical applications, including the localization of active brain sources. Brain source localization provides useful information for understanding the brain's behavior and for cognitive analysis. Various source localization algorithms have been developed to determine the exact locations of the active brain sources that generate the brain's electromagnetic activity. These algorithms are based on digital filtering, 3D imaging, array signal processing, and Bayesian approaches. According to the spatial resolution they provide, the algorithms are categorized as either low-resolution or high-resolution methods. In this study, EEG data were collected by presenting a visual stimulus to healthy subjects. The finite difference method (FDM) was used for head modelling to solve the forward problem. Low-resolution brain electromagnetic tomography (LORETA) and standardized LORETA (sLORETA) were used as inverse modelling methods to localize the active regions in the brain during the stimulus. The results are presented as MRI images, and tables report the estimated current intensity levels for the inverse methods used. A higher current value or intensity level indicates stronger electromagnetic activity for a particular source at a certain time instant. The results demonstrate that the standardized method based on the second-order Laplacian (sLORETA), in conjunction with FDM head modelling, outperforms the other methods in source estimation, as it yields a higher current level, and thus current density (J), for an area compared to the alternatives.
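As a hands-on companion, the sketch below runs an sLORETA inverse solution on MNE-Python's bundled sample dataset. This is an assumption-laden illustration: the file paths and event code are those of the MNE sample data, and the forward model is the precomputed BEM-based one shipped with that dataset rather than the FDM head model used in the study.

```python
import mne
from mne.datasets import sample
from mne.minimum_norm import make_inverse_operator, apply_inverse

# MNE's bundled sample dataset (downloads on first use)
data_path = sample.data_path()
raw = mne.io.read_raw_fif(data_path / "MEG" / "sample" / "sample_audvis_raw.fif")
events = mne.find_events(raw, stim_channel="STI 014")
epochs = mne.Epochs(raw, events, event_id=3, tmin=-0.2, tmax=0.5)  # visual stimulus
evoked = epochs.average()

# Precomputed forward solution and noise covariance shipped with the dataset;
# the paper instead builds its own forward model with FDM.
fwd = mne.read_forward_solution(
    data_path / "MEG" / "sample" / "sample_audvis-meg-eeg-oct-6-fwd.fif")
cov = mne.read_cov(data_path / "MEG" / "sample" / "sample_audvis-cov.fif")

inv = make_inverse_operator(evoked.info, fwd, cov)
stc = apply_inverse(evoked, inv, method="sLORETA")  # standardized LORETA
print(stc.data.shape)  # (n_sources, n_times): estimated current per source
```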
63.
Process capability indices such as Cp are used extensively in manufacturing industries to assess processes and inform purchasing decisions. In practice, the parameter needed to calculate Cp is rarely known and is frequently replaced with an estimate from an in-control reference sample. This article explores the optimal sample size required to achieve a desired estimation error, using the absolute percentage error of different Cp estimators. Moreover, practical tools are provided to allow practitioners to determine the required sample size in different situations.
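To make the estimation problem concrete, the sketch below (not from the article) simulates how the absolute percentage error of the classical estimator Cp-hat = (USL - LSL) / (6s) shrinks as the reference sample size grows; the specification limits and process parameters are made up for illustration.

```python
import numpy as np

def cp_hat(x, lsl, usl):
    """Classical point estimate of Cp from a sample: (USL - LSL) / (6 * s)."""
    return (usl - lsl) / (6.0 * np.std(x, ddof=1))

# Hypothetical in-control process: mean 10, sigma 1, specs chosen so true Cp = 1.5
lsl, usl, sigma = 5.5, 14.5, 1.0
true_cp = (usl - lsl) / (6 * sigma)

rng = np.random.default_rng(42)
for n in (10, 30, 100, 300, 1000):
    # Monte Carlo: mean absolute percentage error of Cp-hat at sample size n
    reps = np.array([cp_hat(rng.normal(10.0, sigma, n), lsl, usl)
                     for _ in range(5000)])
    mape = np.mean(np.abs(reps - true_cp) / true_cp) * 100
    print(f"n={n:5d}  mean |error| = {mape:5.2f}%")
```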
64.
Bone autografts are often used for reconstruction of bone defects; however, due to the limitations of autografts, researchers have searched for bone substitutes. Dentin is of particular interest for this purpose due to its high similarity to bone. This in vitro study assessed the surface characteristics and biological properties of dentin samples prepared with different treatments: regular (RD), demineralized (DemD), and deproteinized (DepD) dentin. X-ray diffraction and Fourier transform infrared spectroscopy were used for surface characterization. Samples were immersed in simulated body fluid, and their bioactivity was evaluated under a scanning electron microscope. The methyl thiazol tetrazolium assay, scanning electron microscope analysis, and quantitative real-time polymerase chain reaction were performed to assess viability/proliferation, adhesion/morphology, and osteoblast differentiation, respectively, of human dental pulp stem cells cultured on dentin powders. Of the three dentin samples, DepD showed the highest and RD the lowest rate of formation and deposition of hydroxyapatite crystals. Although the difference in superficial apatite was not significant among samples, functional groups on the surface were more distinct on DepD. At four weeks, hydroxyapatite deposits appeared as needle-shaped accumulations on the DemD sample, while numerous hexagonal hydroxyapatite deposit masses covered the surface of DepD. The methyl thiazol tetrazolium, scanning electron microscope, and quantitative real-time polymerase chain reaction analyses during the 10-day cell culture on dentin powders showed the highest cell adhesion and viability and the most rapid differentiation in DepD. Based on the parameters evaluated in this in vitro study, DepD showed a high rate of formation/deposition of hydroxyapatite crystals and of adhesion/viability/osteogenic differentiation of human dental pulp stem cells, which may support its osteoinductive/osteoconductive potential for bone regeneration.
65.
With the rapid growth in COVID-19 cases, the healthcare systems of several developed countries have reached the point of collapse. An important and critical step in fighting COVID-19 is the effective screening of infected patients, so that positive patients can be treated and isolated. A chest radiology image-based diagnosis scheme might have several benefits over the traditional approach. The success of artificial intelligence (AI) based techniques in automated diagnosis in the healthcare sector, together with the rapid increase in COVID-19 cases, has created demand for AI-based automated diagnosis and recognition systems. This study develops an Intelligent Firefly Algorithm Deep Transfer Learning Based COVID-19 Monitoring System (IFFA-DTLMS). The proposed IFFA-DTLMS model aims to identify and categorize the occurrence of COVID-19 on chest radiographs. To this end, the presented IFFA-DTLMS model first applies the densely connected network (DenseNet121) model to generate a collection of feature vectors. In addition, the firefly algorithm (FFA) is applied for hyperparameter optimization of the DenseNet121 model. Moreover, an autoencoder-long short-term memory (AE-LSTM) model is exploited for the classification and identification of COVID-19. To ensure the enhanced performance of the IFFA-DTLMS model, wide-ranging experiments were performed, and the results were reviewed from several distinct perspectives. The experimental values show the improvement of the IFFA-DTLMS model over recent approaches.
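To make the transfer-learning step concrete, here is a minimal sketch of extracting DenseNet121 feature vectors from images with Keras. The image paths and input size are hypothetical placeholders, and the paper's firefly-based hyperparameter optimization and AE-LSTM classifier are not reproduced here.

```python
import numpy as np
import tensorflow as tf
from tensorflow.keras.applications.densenet import DenseNet121, preprocess_input

# ImageNet-pretrained DenseNet121 as a frozen feature extractor
base = DenseNet121(include_top=False, weights="imagenet",
                   pooling="avg", input_shape=(224, 224, 3))
base.trainable = False

def extract_features(image_paths):
    """Return one 1024-d DenseNet121 feature vector per image (hypothetical helper)."""
    imgs = []
    for p in image_paths:
        img = tf.keras.utils.load_img(p, target_size=(224, 224))
        imgs.append(tf.keras.utils.img_to_array(img))
    batch = preprocess_input(np.stack(imgs))
    return base.predict(batch, verbose=0)  # shape: (n_images, 1024)

# feats = extract_features(["xray_001.png", "xray_002.png"])  # placeholder paths
```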
66.
This research proposes a machine learning approach using fuzzy logic to build an information retrieval system for the next crop rotation. In case-based reasoning systems, case representation is critical, and researchers have therefore thoroughly investigated textual, attribute-value pair, and ontological representations. Because large databases result in slow case retrieval, this research suggests a fast case retrieval strategy based on an associated representation, in which cases are interrelated as either similar or dissimilar. As soon as a new case is recorded, it is compared to prior data to find a relative match. The proposed method is evaluated on the number of cases and retrieval accuracy, comparing the related case representation with conventional approaches. Hierarchical Long Short-Term Memory (HLSTM) is used to evaluate the efficiency and similarity of the models, and fuzzy rules are applied to predict the environmental conditions and soil quality during a particular time of the year. Based on the results, the proposed approach allows rapid case retrieval with high accuracy.
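The retrieval idea can be illustrated with a small, entirely hypothetical sketch: cases are attribute vectors, a fuzzy membership function grades the closeness of each attribute, and the best-matching prior case is retrieved. The attributes, tolerances, and case base below are invented and are not the paper's representation.

```python
import numpy as np

def triangular(x, a, b, c):
    """Triangular fuzzy membership: 0 at a and c, 1 at the peak b."""
    return np.maximum(np.minimum((x - a) / (b - a), (c - x) / (c - b)), 0.0)

def similarity(query, case):
    """Average fuzzy closeness over attributes (soil pH, rainfall mm, temp C)."""
    spans = np.array([2.0, 400.0, 15.0])  # made-up tolerance per attribute
    diffs = np.abs(np.asarray(query) - np.asarray(case))
    return triangular(diffs, -spans, 0.0, spans).mean()

# Tiny hypothetical case base: attribute vector -> recommended next crop
cases = {
    (6.5, 800.0, 24.0): "rice",
    (7.2, 450.0, 18.0): "wheat",
    (5.8, 600.0, 27.0): "maize",
}

query = (6.8, 700.0, 23.0)
best = max(cases, key=lambda c: similarity(query, c))
print(f"closest case {best} -> rotate to {cases[best]}")
```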
67.
One of the most pressing concerns for the consumer market is the detection of adulteration in high-value meat products. A rapid and accurate identification mechanism for lard adulteration in meat products is highly necessary, both to earn consumers' trust and to enable a definitive diagnosis. Fourier transform infrared spectroscopy (FTIR) is used in this work to identify lard adulteration in cow, lamb, and chicken samples. A simplified extraction method was applied to obtain the lipids from pure and adulterated meat. Adulterated samples were obtained by mixing lard with chicken, lamb, and beef at different concentrations (10%–50% v/v). Principal component analysis (PCA) and partial least squares (PLS) were used to develop a calibration model over 800–3500 cm−1. Three-dimensional PCA was successfully used, by dividing the spectrum into three regions, to classify lard adulteration in chicken, lamb, and beef samples. The corresponding FTIR peaks for lard were observed at 1159.6, 1743.4, 2853.1, and 2922.5 cm−1, differentiating the chicken, lamb, and beef samples. These wavenumbers offer the highest coefficient of determination (R2 = 0.846) and the lowest root mean square error of calibration (RMSEC) and root mean square error of prediction (RMSEP), with an accuracy of 84.6%. Even fat adulteration as low as 10% can be reliably detected using this methodology.
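As a rough illustration of the chemometric step, the sketch below fits a PLS calibration mapping spectra to lard concentration with scikit-learn and reports R2 and RMSE for calibration and prediction sets. The synthetic spectra and the component count are placeholders standing in for real FTIR measurements.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score, mean_squared_error

# Synthetic stand-in for FTIR spectra: 60 samples x 500 wavenumber points,
# with absorbance at a few fabricated "lard" bands scaling with concentration
rng = np.random.default_rng(0)
conc = rng.uniform(0.10, 0.50, 60)            # 10%-50% v/v lard
spectra = rng.normal(0, 0.01, (60, 500))
for band in (120, 260, 410):                  # hypothetical peak positions
    spectra[:, band] += conc

X_cal, X_val, y_cal, y_val = train_test_split(spectra, conc, random_state=1)

pls = PLSRegression(n_components=5)           # arbitrary component count
pls.fit(X_cal, y_cal)

for name, X, y in (("calibration", X_cal, y_cal), ("prediction", X_val, y_val)):
    pred = pls.predict(X).ravel()
    rmse = np.sqrt(mean_squared_error(y, pred))
    print(f"{name}: R2 = {r2_score(y, pred):.3f}, RMSE = {rmse:.4f}")
```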
68.
Classification of human electroencephalogram (EEG) signals can be achieved via artificial intelligence (AI) techniques. In particular, EEG signals associated with epileptic seizures can be detected to distinguish between epileptic and non-epileptic regions. From this perspective, an automated AI technique combined with digital signal processing can be used to improve these signals. This paper proposes two classifiers, long short-term memory (LSTM) and support vector machine (SVM), for the classification of seizure and non-seizure EEG signals. These classifiers are applied to a public dataset from the University of Bonn, which consists of two classes: seizure and non-seizure. In addition, a fast Walsh-Hadamard transform (FWHT) is implemented to analyze the EEG signals within the recurrence space of the brain; the Hadamard coefficients of the EEG signals are thus obtained via the FWHT. Moreover, the FWHT contributes to efficiently separating seizure EEG recordings from non-seizure EEG recordings. A k-fold cross-validation technique is applied to validate the performance of the proposed classifiers. The LSTM classifier provides the best performance, with a testing accuracy of 99.00%. The training and testing loss rates for the LSTM are 0.0029 and 0.0602, respectively, while the weighted average precision, recall, and F1-score for the LSTM are all 99.00%. The SVM classifier reached 91% accuracy, 93.52% sensitivity, and 91.3% specificity. The computational time consumed for training the LSTM and SVM is 2000 and 2500 s, respectively. The results show that the LSTM classifier outperforms the SVM in the classification of EEG signals. Both proposed classifiers provide high classification accuracy compared to previously published classifiers.
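To show the shape of this pipeline, here is a self-contained sketch pairing a hand-rolled fast Walsh-Hadamard transform with an SVM under k-fold cross-validation. The "EEG" segments and labels are fabricated, so the sketch mirrors only the processing chain, not the Bonn-dataset results.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

def fwht(x):
    """Fast Walsh-Hadamard transform; len(x) must be a power of 2."""
    x = np.array(x, dtype=float)
    h = 1
    while h < len(x):
        for i in range(0, len(x), 2 * h):
            a, b = x[i:i + h].copy(), x[i + h:i + 2 * h].copy()
            x[i:i + h], x[i + h:i + 2 * h] = a + b, a - b
        h *= 2
    return x

# Fabricated stand-in for seizure vs non-seizure segments (256-sample windows);
# the "seizure" class gets an extra high-amplitude rhythmic component
rng = np.random.default_rng(7)
t = np.arange(256) / 256.0
X, y = [], []
for label in (0, 1):
    for _ in range(100):
        sig = rng.normal(0, 1, 256) + label * 3.0 * np.sin(2 * np.pi * 8 * t)
        X.append(np.abs(fwht(sig)))   # Hadamard coefficients as features
        y.append(label)
X, y = np.array(X), np.array(y)

# k-fold cross-validation of an SVM on the Hadamard coefficients
scores = cross_val_score(SVC(kernel="rbf"), X, y, cv=5)
print("5-fold accuracy:", scores.mean().round(3))
```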
69.
Magnetic resonance imaging (MRI) brain tumor segmentation is a crucial task for clinical treatment. However, it is challenging owing to variations in the type, size, and location of tumors. In addition, anatomical variation among individuals, intensity non-uniformity, and noise adversely affect brain tumor segmentation. To address these challenges, an automatic region-based brain tumor segmentation approach is presented in this paper which combines a fuzzy shape prior term with deep learning. We define a new energy function in which an Adaptively Regularized Kernel-Based Fuzzy C-Means (ARKFCM) clustering algorithm is utilized to infer the shape of the tumor, which is then embedded into the level set method. In this way, some shortcomings of traditional level set methods, such as contour leakage and shrinkage, are eliminated. Moreover, a fully automated method is achieved by using U-Net to obtain the initial contour, reducing sensitivity to initial contour selection. The proposed method is validated on the BraTS 2017 benchmark dataset for brain tumor segmentation. Average values of Dice, Jaccard, sensitivity, and specificity are 0.93 ± 0.03, 0.86 ± 0.06, 0.95 ± 0.04, and 0.99 ± 0.003, respectively. Experimental results indicate that the proposed method outperforms other state-of-the-art methods in brain tumor segmentation.
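Since the paper reports Dice and Jaccard scores, a short sketch of computing these overlap metrics for binary segmentation masks may be useful; the masks below are random placeholders rather than BraTS data.

```python
import numpy as np

def dice(pred, truth):
    """Dice coefficient = 2|A ∩ B| / (|A| + |B|) for boolean masks."""
    inter = np.logical_and(pred, truth).sum()
    return 2.0 * inter / (pred.sum() + truth.sum())

def jaccard(pred, truth):
    """Jaccard index = |A ∩ B| / |A ∪ B| for boolean masks."""
    inter = np.logical_and(pred, truth).sum()
    union = np.logical_or(pred, truth).sum()
    return inter / union

# Placeholder masks standing in for a predicted and ground-truth tumor region
rng = np.random.default_rng(3)
truth = rng.random((128, 128)) > 0.7
pred = truth.copy()
pred[rng.random((128, 128)) > 0.95] ^= True   # perturb ~5% of pixels

print(f"Dice = {dice(pred, truth):.3f}, Jaccard = {jaccard(pred, truth):.3f}")
```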
70.
The automatic recognition of dialogue acts is a task of crucial importance for processing natural language dialogue at the discourse level. It is also one of the most challenging problems, as the dialogue act is most often not expressed directly in the speaker's utterance. In this paper, a new cue-based model for dialogue act recognition is presented. The model is, essentially, a dynamic Bayesian network induced from a manually annotated dialogue corpus via dynamic Bayesian machine learning algorithms. Furthermore, the dynamic Bayesian network's random variables are constituted from sets of lexical cues selected automatically by a variable-length genetic algorithm developed specifically for this purpose. To evaluate the proposed design approaches, three stages of experiments were conducted. In the initial stage, the dynamic Bayesian network model is constructed using sets of lexical cues selected manually from the dialogue corpus; the model is evaluated against two previously proposed models, and the results confirm the potential of dynamic Bayesian networks for dialogue act recognition. In the second stage, the variable-length genetic algorithm is used to select different sets of lexical cues to constitute the dynamic Bayesian network's random variables; the approach is evaluated against previously used ranking approaches, and the results provide experimental evidence of its ability to avoid the drawbacks of ranking approaches. In the third stage, the dynamic Bayesian network model is constructed using random variables constituted from the sets of lexical cues generated in the second stage, and the results confirm the effectiveness of the proposed approaches for designing a dialogue act recognition model.
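As a loose illustration of genetic-algorithm cue selection (using fixed-length binary masks rather than the paper's variable-length chromosomes), the toy sketch below evolves a subset of lexical cues under a made-up fitness that rewards a small set of "informative" cues.

```python
import numpy as np

rng = np.random.default_rng(11)
N_CUES = 30
informative = {2, 5, 11, 17, 23}   # hypothetical "useful" lexical cues

def fitness(mask):
    """Made-up score: reward selecting informative cues, penalize extras."""
    chosen = set(np.flatnonzero(mask))
    return len(chosen & informative) - 0.2 * len(chosen - informative)

def evolve(pop_size=50, gens=100, p_mut=0.02):
    pop = rng.integers(0, 2, (pop_size, N_CUES))
    for _ in range(gens):
        scores = np.array([fitness(ind) for ind in pop])
        # Binary tournament selection of parents
        idx = rng.integers(0, pop_size, (pop_size, 2))
        parents = pop[np.where(scores[idx[:, 0]] >= scores[idx[:, 1]],
                               idx[:, 0], idx[:, 1])]
        # One-point crossover between consecutive parent pairs
        cut = rng.integers(1, N_CUES, pop_size)
        children = parents.copy()
        for i in range(0, pop_size - 1, 2):
            c = cut[i]
            children[i, c:], children[i + 1, c:] = parents[i + 1, c:], parents[i, c:]
        # Bit-flip mutation
        children ^= (rng.random((pop_size, N_CUES)) < p_mut).astype(int)
        pop = children
    best = max(pop, key=fitness)
    return np.flatnonzero(best)

print("selected cues:", evolve())
```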