41.
42.
Response surface methodology (RSM) was used to optimise alkaline protein extraction from cañihua grain meal so as to maximise protein extraction yield. Several factors (temperature, extraction time, solvent/meal ratio, pH and NaCl molar concentration) were screened and their influence on the extracted protein yield evaluated for significance. The first four factors were selected and studied by RSM using a central composite design (CCD). The resulting model fitted the data satisfactorily (R² = 0.801). Optimal cañihua protein extraction conditions were a temperature of 21 °C, an extraction time of 5 min and a solvent/meal ratio of 37/1 (v/w) at pH 10, giving a protein yield of 80.4 ± 1.3%, in close agreement with the predicted value of 81.4%. Protein denaturation was also studied by differential scanning calorimetry (DSC), which gave a denaturation temperature of 93.4 °C and an enthalpy of 1.22 ± 0.05 J g⁻¹. These results are useful from a technological standpoint for designing an optimal protein extraction process and cañihua food processing strategies.
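As a brief illustration of the response-surface step described above, the sketch below fits a second-order (quadratic) model to a small central-composite-style design with ordinary least squares and reports R². The two coded factors, their levels and the yields are hypothetical placeholders, not the four-factor CCD data of the study.

```python
# Minimal sketch: fit a quadratic response-surface model by least squares (numpy only).
# All numbers below are invented for illustration; the published model used four factors.
import numpy as np

# Hypothetical coded levels for two factors (e.g. temperature, pH) and observed yield (%)
x1 = np.array([-1, -1,  1,  1,  0, 0, 0, -1.41, 1.41,  0,    0])
x2 = np.array([-1,  1, -1,  1,  0, 0, 0,  0,    0,   -1.41, 1.41])
y  = np.array([62, 71, 65, 78, 80, 81, 79, 60,  70,   64,   75])

# Design matrix: intercept, linear, interaction and quadratic terms
X = np.column_stack([np.ones_like(x1), x1, x2, x1 * x2, x1**2, x2**2])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)

y_hat = X @ beta
r2 = 1 - np.sum((y - y_hat) ** 2) / np.sum((y - y.mean()) ** 2)
print("coefficients:", np.round(beta, 3))
print("R^2 =", round(r2, 3))
```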
43.
Acrylamide Mitigation in Potato Chips by Using NaCl
In April 2002, Swedish researchers shocked the world when they reported preliminary findings on the presence of acrylamide in fried and baked foods, most notably potato chips and French fries, at levels of 30–2,300 ppb. The objective of this research was to study the effect of immersing potato slices in a NaCl solution on acrylamide formation in the resulting potato chips. Potato slices (Verdi variety, diameter 40 mm, thickness 2.0 mm) were fried at 170 °C for 5 min (final moisture content ∼2.0%). Prior to frying, the potato slices were treated in one of the following ways: (1) control slices (unblanched, raw potato slices); (2) slices blanched at 90 °C for 5 min in water; (3) slices blanched at 90 °C for 5 min and then immersed in a 1 g/100 g NaCl solution at 25 °C for 5 min; (4) slices blanched at 90 °C for 5 min and then immersed in a 3 g/100 g NaCl solution at 25 °C for 5 min; (5) slices blanched at 90 °C for 5 min and then immersed in distilled water at 25 °C for 5 min; and (6) slices blanched at 90 °C for 5 min in a 3 g/100 g NaCl solution. Blanching followed by immersion of the potato slices in the 1 g/100 g NaCl solution reduced the acrylamide content by ∼62%; however, almost half of this reduction (∼27%) could be attributed to the effect of the NaCl itself and ∼35% to the mild heat treatment during the salt-immersion step (25 °C for 5 min). Blanching appears to facilitate NaCl diffusion into the potato tissue, leading to a significant reduction of acrylamide in the potato slices after frying.
44.
We present a reference model for finding (prima facie) evidence of discrimination in datasets of historical decision records for socially sensitive tasks, including access to credit, mortgages, insurance, the labor market and other benefits. We formalize the process of direct and indirect discrimination discovery in a rule-based framework by modelling protected-by-law groups, such as minorities or disadvantaged segments, and the contexts in which discrimination occurs. Classification rules extracted from the historical records allow contexts of unlawful discrimination to be unveiled, where the degree of burden on protected-by-law groups is evaluated by formalizing existing norms and regulations in terms of quantitative measures. These measures are defined as functions of the contingency table of a classification rule, and their statistical significance is assessed using the large body of statistical inference methods for proportions. Key legal concepts and lines of reasoning are then used to drive the analysis of the set of classification rules, with the aim of discovering patterns of discrimination, either direct or indirect. Analyses of affirmative action, favoritism and argumentation against discrimination allegations are also modelled in the proposed framework. Finally, we present an implementation of the overall reference model, called LP2DD, that integrates induction, through data mining classification rule extraction, and deduction, through a computational logic implementation of the analytical tools. The LP2DD system is put to work on the analysis of a dataset of credit decision records.
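To make the contingency-table measures mentioned above concrete, the following sketch computes a few standard quantities (risk difference, risk ratio, extended lift) and a two-proportion z-test for one hypothetical rule. It is an illustrative stand-in, not the LP2DD system, and the counts are invented.

```python
# Minimal sketch of discrimination measures for a rule "context AND protected-group => deny",
# compared against the same context for the unprotected group. Counts are made up.
import math

# Contingency table for a fixed context (e.g. applicants from a given district)
a, n1 = 40, 100   # denials and total applications in the protected group
b, n2 = 20, 100   # denials and total applications in the unprotected group

p1, p2 = a / n1, b / n2
risk_difference = p1 - p2                       # absolute gap in denial rates
risk_ratio = p1 / p2                            # relative gap
elift = p1 / ((a + b) / (n1 + n2))              # lift of the rule w.r.t. the whole context

# Two-proportion z-test for the statistical significance of the gap
p_pool = (a + b) / (n1 + n2)
se = math.sqrt(p_pool * (1 - p_pool) * (1 / n1 + 1 / n2))
z = risk_difference / se
print(f"RD={risk_difference:.2f}  RR={risk_ratio:.2f}  elift={elift:.2f}  z={z:.2f}")
```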
45.
There are numerous applications in which a variety of human and software participants interactively pursue a given task (play a game, engage in a simulation, etc.). In this paper, we define a basic architecture for a distributed, interactive system (DIS for short). We then formally define a mathematical construct called a DIS abstraction that provides a theoretical basis for a software platform for building distributed interactive systems. Our framework provides a language for building multiagent applications in which each agent has its own behaviors and the behavior of the multiagent application as a whole is governed by one or more “master” agents. Agents in such an application may compete for resources, may attempt to take actions based on incorrect beliefs, may attempt to take actions that conflict with actions being concurrently attempted by other agents, and so on. Master agents mediate such conflicts. Our language for building agents (ordinary and master) depends critically on a notion we define called a “generalized constraint”. All agents attempt to optimize an objective function while satisfying the generalized constraints they are bound to preserve. We develop several algorithms that determine how an agent satisfies its generalized constraints in response to events in the multiagent application, and we evaluate these algorithms experimentally to understand their advantages and disadvantages.
46.
In this work, biodegradable nanocomposites based on polycaprolactone (PCL) reinforced with pristine and organo-modified bentonites are prepared by melt extrusion. The bentonite is exchanged with benzalkonium chloride (CBK) in a pilot-plant-scale reactor. The influence of clay type and loading on the morphology, rheology, mechanical properties and creep performance of the resulting materials is analyzed. In addition, several theoretical models are applied to the experimental creep data, and master curves are used to relate time and temperature to the compliance of the materials. The morphological characterization of the nanocomposites shows that organo-modification of the clay greatly improves its dispersion in the polymer matrix. As a consequence, reinforcement of PCL with a 3 wt% loading of organoclay is shown to produce the strongest improvement in creep resistance: the instantaneous creep strain and the experimental creep rate decrease by more than 9% and 27%, respectively, over the range of temperatures analyzed. Moreover, the experimental values are adequately fitted by the theoretical creep models at the different clay loadings. The material with the optimal creep behavior also shows the greatest improvements in tensile mechanical properties.
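The creep-model fitting mentioned above can be illustrated with a small sketch: fitting the four-element Burgers compliance model to synthetic creep data with SciPy's curve_fit. The Burgers form, the parameter values and the data are assumptions chosen for illustration; the paper's own models and fitted parameters may differ.

```python
# Minimal sketch: fit a Burgers-type creep compliance J(t) to synthetic data.
import numpy as np
from scipy.optimize import curve_fit

def burgers_compliance(t, J0, eta0, J1, tau1):
    """J(t) = J0 + t/eta0 + J1 * (1 - exp(-t/tau1))  (four-element Burgers model)."""
    return J0 + t / eta0 + J1 * (1.0 - np.exp(-t / tau1))

# Synthetic creep-compliance data (1/MPa) over 1000 s, with a little noise
t = np.linspace(1, 1000, 50)
true_J = burgers_compliance(t, 0.8, 5e4, 0.4, 120.0)
rng = np.random.default_rng(0)
J_exp = true_J + rng.normal(0, 0.005, t.size)

popt, _ = curve_fit(burgers_compliance, t, J_exp, p0=[1.0, 1e5, 0.5, 100.0])
print("fitted J0, eta0, J1, tau1:", np.round(popt, 3))
```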
47.
The superficial appearance and color of a food are the first quality parameters evaluated by consumers and are thus critical factors in the acceptance of the food item. Although there are several color spaces, the one most widely used for measuring color in food is the L*a*b* color space, owing to its uniform distribution of colors and its closeness to human color perception. Digital image analysis of food requires the color of each pixel on the surface of the food item; however, no commercial per-pixel L*a*b* measurements are currently available, because existing commercial colorimeters generally measure small, non-representative areas of a few square centimeters. Given that RGB digital cameras acquire information per pixel, this article presents a computational solution that yields digital images in L*a*b* color units for each pixel of a digital RGB image. Five models for the RGB → L*a*b* conversion are presented: linear, quadratic, gamma, direct and neural network. Additionally, a method is proposed for estimating the parameters of the models by minimizing the mean absolute error between the color measurements obtained by the models and those obtained by a commercial colorimeter on uniform, homogeneous surfaces. In the performance evaluation, the neural network model stands out with an error of only 0.93%. On the basis of these models, it is possible to build an L*a*b* color-measurement system, based on a color digital camera, that is suitable for an accurate, exacting and detailed characterization of a food item, thereby improving quality control and providing a highly useful tool for the food industry.
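As background for the RGB → L*a*b* conversion discussed above, the sketch below applies the standard sRGB (D65) colorimetric transform per pixel. This is the generic pipeline rather than the camera-specific linear, quadratic, gamma, direct or neural models the authors estimate against a colorimeter; in those models the fixed matrix and gamma below would be replaced by fitted parameters.

```python
# Minimal sketch: per-pixel sRGB -> CIELAB conversion (standard sRGB primaries, D65 white).
import numpy as np

def srgb_to_lab(rgb):
    """rgb: array of shape (..., 3) with values in [0, 1]."""
    rgb = np.asarray(rgb, dtype=float)
    # 1) undo the sRGB gamma
    lin = np.where(rgb <= 0.04045, rgb / 12.92, ((rgb + 0.055) / 1.055) ** 2.4)
    # 2) linear RGB -> XYZ
    M = np.array([[0.4124, 0.3576, 0.1805],
                  [0.2126, 0.7152, 0.0722],
                  [0.0193, 0.1192, 0.9505]])
    xyz = lin @ M.T
    # 3) XYZ -> L*a*b* relative to the D65 white point
    white = np.array([0.95047, 1.0, 1.08883])
    t = xyz / white
    f = np.where(t > (6 / 29) ** 3, np.cbrt(t), t / (3 * (6 / 29) ** 2) + 4 / 29)
    L = 116 * f[..., 1] - 16
    a = 500 * (f[..., 0] - f[..., 1])
    b = 200 * (f[..., 1] - f[..., 2])
    return np.stack([L, a, b], axis=-1)

print(srgb_to_lab([[1.0, 0.0, 0.0]]))  # pure red -> roughly L*=53, a*=80, b*=67
```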
48.
The spatial and temporal variations of pico-, nano- and microphytoplankton abundance and composition were investigated over a 37-month period, focusing on the ecological role of the different size classes of phytoplankton and on the changes in community structure that may occur during periods when large mucilage macroaggregates appear. Samples were collected monthly from June 1999 to July 2002 at 11 stations along three transects covering the northern Adriatic basin. The highest abundances were observed in late winter/spring for microphytoplankton (mainly diatoms), in spring–summer for nanophytoplankton, and in summer for picophytoplankton. The autotrophic component was more abundant in the summers of 2000 and 2002 (when large mucilage aggregates occurred) than in the summers of 1999 and 2001 (when no massive phenomenon was observed). This increase was statistically significant for pico- and nanophytoplankton and, among the microphytoplankton, only for dinoflagellates. Blooms of picophytoplankton were often observed in the bottom layer during mucilage summers. The microphytoplankton community during mucilage phenomena was characterized by a species composition (Chaetoceros spp., Cerataulina pelagica, Pseudo-nitzschia delicatissima, P. pseudodelicatissima, Cylindrotheca closterium, Dactyliosolen fragilissimus) comparable to that observed in summers without extensive mucilage occurrence. However, some species appeared at significantly higher densities in the summers of 2000 and 2002: Ceratium furca, C. closterium, Oxytoxum spp., Hemiaulus hauckii and Gonyaulax fragilis. Microscopic observation of the aggregates revealed that the microphytoplankton species composition inside the aggregates was comparable to that in the water column, with an enrichment of opportunistic species such as C. closterium and P. delicatissima. The presence of mucilage aggregates affects the phytoplankton populations in the water column, even when the aggregates are at early stages. There appears to be a mutual relationship between phytoplankton and aggregates: several diatom and dinoflagellate species may contribute to aggregate formation and enlargement, while the mucilage aggregates themselves may also affect the phytoplankton populations, allowing the development of a rich diatom community and generally enhancing nanophytoplankton growth.
49.
Time-focused clustering of trajectories of moving objects
Spatio-temporal, geo-referenced datasets are growing rapidly and will grow even faster in the near future, for both technological and social/commercial reasons. From the data-mining viewpoint, spatio-temporal trajectory data introduce new dimensions and, correspondingly, novel issues in performing analysis tasks. In this paper, we consider the clustering problem applied to the trajectory data domain. In particular, we propose an adaptation of a density-based clustering algorithm to trajectory data, based on a simple notion of distance between trajectories. A set of experiments on synthesized data is then performed in order to test the algorithm and compare it with other standard clustering approaches. Finally, a new approach to the trajectory clustering problem, called temporal focussing, is sketched, with the aim of exploiting the intrinsic semantics of the temporal dimension to improve the quality of trajectory clustering. The authors are members of the Pisa KDD Laboratory, a joint research initiative of ISTI-CNR and the University of Pisa.
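A minimal sketch of the idea of density-based clustering over a trajectory distance is given below: trajectories sampled at the same time instants are compared by their average pointwise Euclidean distance, and the precomputed distance matrix is fed to DBSCAN. The distance function, parameters and toy trajectories are illustrative assumptions, not the paper's exact algorithm.

```python
# Minimal sketch: density-based clustering of trajectories via a precomputed distance matrix.
import numpy as np
from sklearn.cluster import DBSCAN

def avg_distance(t1, t2):
    """t1, t2: arrays of shape (n_timestamps, 2) sampled at the same time instants."""
    return np.mean(np.linalg.norm(t1 - t2, axis=1))

# Three made-up trajectories, 5 time steps each, coordinates (x, y)
trajectories = np.array([
    [[0, 0], [1, 0], [2, 0], [3, 0], [4, 0]],
    [[0, 1], [1, 1], [2, 1], [3, 1], [4, 1]],   # close to the first one
    [[0, 9], [1, 9], [2, 8], [3, 8], [4, 7]],   # far away -> left as noise
], dtype=float)

n = len(trajectories)
D = np.zeros((n, n))
for i in range(n):
    for j in range(n):
        D[i, j] = avg_distance(trajectories[i], trajectories[j])

labels = DBSCAN(eps=2.0, min_samples=2, metric="precomputed").fit_predict(D)
print(labels)   # [0, 0, -1]: the first two trajectories grouped, the third marked as noise
```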
50.
The proposed FEM model describes the natural convective air cooling of cheese curds and cheeses of different sizes, chemical compositions and initial temperatures, and includes temperature-dependent functions accounting for the variation of the specific heat capacity and thermal conductivity of the cheeses. Both the calculated convective heat transfer coefficients (from 3.58 to 15.15 W/m² K) and the ratio between the Grashof number and the square of the Reynolds number confirmed that heat exchange occurred by natural convection. The model accurately predicted the transient temperature change inside the cheese, as shown by the mean RMSE values (from 0.34 to 2.29 °C). Higher RMSE values (up to 3.29 °C) were obtained for the cheese curds, because some deviations from the model's assumptions occurred; these higher values quantify the importance of complying with the model's assumptions to ensure the best fit between simulated and experimental data.
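The Grashof/Reynolds criterion mentioned above can be illustrated with a back-of-the-envelope sketch: if Gr/Re² is much greater than 1, buoyancy dominates and the regime is natural convection, and a laminar vertical-plate correlation then gives an order-of-magnitude convective coefficient. The geometry, temperature difference, residual air speed and air properties below are assumed for illustration, not taken from the paper.

```python
# Minimal sketch: natural-convection check (Gr/Re^2 >> 1) and an order-of-magnitude
# convective heat transfer coefficient from a laminar vertical-plate correlation.
# All input values below are illustrative assumptions.
g, beta, nu, k_air, Pr = 9.81, 3.3e-3, 1.6e-5, 0.026, 0.71  # air near 25 degC
L = 0.10          # characteristic height of the cheese, m (assumed)
dT = 18.0         # surface-to-air temperature difference, K (assumed)
u = 0.1           # residual air speed in the ripening room, m/s (assumed)

Gr = g * beta * dT * L**3 / nu**2
Re = u * L / nu
Ra = Gr * Pr
Nu = 0.59 * Ra**0.25          # laminar vertical-plate correlation, valid for 1e4 < Ra < 1e9
h = Nu * k_air / L            # convective heat transfer coefficient, W/(m^2 K)

print(f"Gr/Re^2 = {Gr / Re**2:.1f}  (>> 1 -> natural convection dominates)")
print(f"h = {h:.1f} W/(m^2 K)")   # falls within the 3.6-15.2 W/(m^2 K) range reported above
```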