1.
When domestic wastewater was treated with different onsite applications of buried sand filters and sequencing batch reactors (SBRs), good organic matter removal was common, and effluent BOD7 concentrations from 5 to 20 mg/l were easily achievable. For total nitrogen, effluent concentrations were usually between 20 and 80 mg/l. Good phosphorus removal was difficult to achieve even with special adsorption or precipitation materials, and large variations occurred. The median effluent concentration of total phosphorus in the most successful sand filter application was less than 0.1 mg/l, while the other sand filters and the SBRs had median concentrations from 1.7 to 6.7 mg/l. These results are based on one year of in situ monitoring of 2 conventional buried sand filters, 6 sand filter applications with special phosphorus-adsorbing media within the filter bed, 5 sand filters with separate tertiary phosphorus filtration, and 11 small SBRs of three different types. The study was carried out in southern Finland during 2003–05. The whole project included monitoring of more than 60 plants of 20 different treatment types or methods, used under normal conditions to treat domestic wastewater. The different systems were evaluated by comparing the measured effluent concentrations; in addition, the effluent concentrations were compared to the discharge limits calculated according to the new Finnish regulation.
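The evaluation step described in this abstract amounts to comparing median effluent concentrations against discharge limits. A minimal sketch of that comparison follows; the plant names, concentration series, and the 1.0 mg/l limit are illustrative placeholders, not the study's measurements:

```python
# Sketch: summarizing effluent monitoring data by median concentration
# and comparing against a discharge limit. All values are illustrative
# (mg/l), not the study's data.
from statistics import median

# Hypothetical one-year total-phosphorus series for two plants
effluent_tp = {
    "sand_filter_A": [0.05, 0.08, 0.12, 0.07, 0.09],
    "sbr_B":         [1.9, 2.4, 3.1, 1.7, 2.8],
}

TP_LIMIT = 1.0  # hypothetical discharge limit, mg/l

def meets_limit(series, limit):
    """True if the median effluent concentration is within the limit."""
    return median(series) <= limit

for plant, series in effluent_tp.items():
    print(plant, round(median(series), 2), meets_limit(series, TP_LIMIT))
```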
2.
Mobile TV has recently been launched in several countries. As mobile TV integrates television content into mobile phones, the most personal of communication devices, it becomes interesting to know how this feature will be used throughout the day and in the varying contexts of everyday life. This paper presents empirical results on the use of mobile TV with different delivery mechanisms, together with both quantitative and qualitative results on how end-users prefer to use mobile TV content in different situations. The data are based on ongoing empirical research in Finland in 2006 and 2007. The mobile TV services under study included both news and entertainment content and were tested in 3G, DVB-H and Wi-Fi networks using different delivery paradigms: broadcast, on-demand and download. To explore the use of different delivery methods and content consumption, we developed a mobile TV service prototype called Podracing. The analysis shows that users appreciated up-to-date information and information-rich media forms and contents, especially for mobile news delivery. There was high demand for only the latest news on mobiles, and the real-time property was considered important. Most users looked at the headlines or followed the news several times a day, much more often than traditional TV and news prime times would allow.
3.
In a large organization, informal communication and simple backlogs are not sufficient for the management of requirements and development work. Many large organizations are struggling to adopt agile methods successfully, but there is still little scientific knowledge on requirements management in large-scale agile development organizations. We present an in-depth study of an Ericsson telecommunications node development organization which employs a large-scale agile method to develop telecommunications system software. We describe how requirements flow from strategy to release, and the related benefits and problems. Data were collected through 43 interviews, which were analyzed qualitatively. Requirements management was done in three different processes, each of which had a different process model, purpose and planning horizon. The release project management process was plan-driven, the feature development process was continuous, and the implementation management process was agile. The perceived benefits included reduced development lead time, increased flexibility, increased planning efficiency, increased developer motivation and improved communication effectiveness. The recognized problems included difficulties in balancing planning effort, overcommitment, insufficient understanding of development team autonomy, defining the product owner role, balancing team specialization, organizing system-level work and growing technical debt. The study indicates that agile development methods can be successfully employed in organizations where the higher-level planning processes are not agile. Combining agile methods with a flexible feature development process can bring many benefits, but large-scale software development seems to require specialist roles and significant coordination effort.
4.
5.
It is shown that the (infinite) tiling problem with Wang tiles is undecidable even if the given tile set is deterministic by all four corners, i.e., a tile is uniquely determined by the colors of any two adjacent edges. The reduction is from the Turing machine halting problem and uses the aperiodic tile set of Kari and Papasoglu.
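The determinism property in this abstract is easy to state operationally: for each of the four corners (pairs of adjacent edges), no two distinct tiles may share the same color pair. A minimal sketch of that check follows; the tile sets below are illustrative toy examples, not the Kari–Papasoglu set:

```python
# Sketch: checking that a Wang tile set is deterministic by all four
# corners, i.e. the colors of any two adjacent edges uniquely determine
# the tile. Tiles are (north, east, south, west) color tuples.

# The four corners pair adjacent edges: NE=(n,e), SE=(e,s), SW=(s,w), NW=(w,n)
CORNERS = [(0, 1), (1, 2), (2, 3), (3, 0)]

def is_4way_deterministic(tiles):
    for i, j in CORNERS:
        seen = set()
        for t in tiles:
            key = (t[i], t[j])
            if key in seen:        # two tiles share this corner's colors
                return False
            seen.add(key)
    return True

tiles_ok = [(0, 0, 1, 1), (1, 1, 0, 0)]
tiles_bad = [(0, 0, 1, 1), (0, 0, 2, 2)]  # both have NE corner (0, 0)
print(is_4way_deterministic(tiles_ok))   # → True
print(is_4way_deterministic(tiles_bad))  # → False
```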
6.
In the past, 2D profile analysis has often been considered sufficient to control the geometrical features of a surface and to ensure that they are compatible with the required functionalities. Yet experience shows that 3D surface texture analysis is now essential wherever a complete assessment of the surface is required, enabling the selection of the most appropriate surface texture to achieve a required functionality. This paper introduces the measurement strategies and features considered essential by SOMICRONIC when designing or developing 3D surface texture measuring instruments; the aim is to assess the measured surface in a way that reveals the real surface, without losing sight of existing ISO standardized concepts for surface texture profile analysis. Indeed, essentially for economic reasons, we consider that 3D surface texture stylus instruments must be designed to perform classical surface texture profile measurements and characterisations and to fulfil the conditions and characteristics listed and described in existing surface texture ISO standards. The following points are developed below:
• Design of the data acquisition unit of a 3D surface texture instrument
• Strategy for the measurement of the surface
• Application of these concepts, with examples.
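To make the step from profiles to areal (3D) characterisation concrete, here is a minimal sketch of one standardized areal parameter, Sa (arithmetical mean height, defined in ISO 25178 as the mean absolute deviation from the mean plane). The synthetic sinusoidal surface is illustrative; a real instrument would supply the measured height map:

```python
# Sketch: a basic 3D areal texture parameter, Sa (arithmetical mean
# height, ISO 25178), computed from a height map z. The surface below
# is a synthetic sinusoidal texture, not measured data.
import numpy as np

def sa(z):
    """Arithmetical mean height: mean |deviation| from the mean plane."""
    return np.mean(np.abs(z - z.mean()))

# Synthetic 2-D height map (micrometres): 0.5 um amplitude sine texture
x = np.linspace(0, 2 * np.pi, 200)
z = 0.5 * np.sin(x)[None, :] * np.ones((100, 1))
print(round(float(sa(z)), 2))  # ≈ 0.32, close to the analytic 0.5 · 2/π
```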
7.
8.
The neuroimaging community relies heavily on statistical inference to explain measured brain activity given the experimental paradigm. Undeniably, this method has led to many results, but it is limited by the richness of the generative models that are deployed, typically in a mass-univariate way. Such an approach is suboptimal given the high-dimensional and complex spatiotemporal correlation structure of neuroimaging data. Over recent years, techniques from pattern recognition have brought new insights into where and how information is stored in the brain by predicting the stimulus or state from the data. Pattern recognition is intrinsically multivariate and the underlying models are data-driven. Moreover, the predictive setting is more powerful for many applications, including clinical diagnosis and brain–computer interfacing. This special issue features a number of papers that identify and tackle remaining challenges in this field. The specific problems at hand constitute opportunities for future research in pattern recognition and neurosciences.
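The multivariate advantage described here can be illustrated with a toy decoding example: when a weak effect is distributed over many voxels, a simple multivariate readout pools the evidence and decodes well above chance. Everything below is simulated, not real neuroimaging data, and the nearest-mean classifier is only one minimal choice of multivariate model:

```python
# Sketch: multivariate pattern analysis pooling a weak, distributed
# effect. Two conditions differ by a small shift in every "voxel";
# a nearest-class-mean readout decodes the condition well above chance.
# All data are simulated.
import numpy as np

rng = np.random.default_rng(0)
n_trials, n_voxels = 200, 50
signal = 0.3 * np.ones(n_voxels)          # weak effect in every voxel

X_a = rng.normal(0.0, 1.0, (n_trials, n_voxels)) + signal
X_b = rng.normal(0.0, 1.0, (n_trials, n_voxels)) - signal

# Train nearest-mean on the first half of trials, test on the second half
mu_a, mu_b = X_a[:100].mean(0), X_b[:100].mean(0)

def predict(x):
    return "a" if np.sum((x - mu_a) ** 2) < np.sum((x - mu_b) ** 2) else "b"

test_set = [(x, "a") for x in X_a[100:]] + [(x, "b") for x in X_b[100:]]
acc = np.mean([predict(x) == y for x, y in test_set])
print(f"decoding accuracy: {acc:.2f}")    # well above the 0.5 chance level
```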
9.
The present aim was to investigate the functionality of a new wireless prototype called Face Interface. The prototype combines voluntary gaze direction and facial muscle activations for pointing at and selecting objects on a computer screen, respectively. The subjective and objective functionality of the prototype was evaluated with a series of pointing tasks using either frowning (i.e., the frowning technique) or raising the eyebrows (i.e., the raising technique) as the selection technique. Pointing task times and accuracies were measured using three target diameters (25, 30, and 40 mm), seven pointing distances (60, 120, 180, 240, 260, 450, and 520 mm), and eight pointing angles (0°, 45°, 90°, 135°, 180°, 225°, 270°, and 315°). The results showed that the raising technique was a faster selection technique than the frowning technique for objects presented at pointing distances from 60 mm to 260 mm. For those distances, the overall pointing task times were 2.4 s for the frowning technique and 1.6 s for the raising technique. Fitts' law computations showed correlations for the Fitts' law model of r = 0.77 for the frowning technique and r = 0.51 for the raising technique. Further, the index of performance (IP) was 1.9 bits/s for the frowning technique and 5.4 bits/s for the raising technique. Based on the results, the prototype functioned well and was adjustable, so that two different facial activations could be used in combination with gaze direction for pointing at and selecting objects on a computer screen.
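The Fitts' law quantities reported above have a standard definition: the index of difficulty is ID = log2(D/W + 1) (the Shannon formulation), and the index of performance is IP = ID/MT. A minimal sketch follows; the particular distance, width, and movement-time values are illustrative examples within the study's ranges, not its fitted data:

```python
# Sketch: the standard Fitts' law quantities. ID uses the Shannon
# formulation ID = log2(D/W + 1); the index of performance is
# IP = ID / movement time. Example values are illustrative.
import math

def index_of_difficulty(distance_mm, width_mm):
    """Shannon formulation of Fitts' index of difficulty, in bits."""
    return math.log2(distance_mm / width_mm + 1)

def index_of_performance(id_bits, movement_time_s):
    """Throughput in bits per second."""
    return id_bits / movement_time_s

# Example: a 40 mm target at a 240 mm pointing distance
id_bits = index_of_difficulty(240, 40)        # log2(7) ≈ 2.81 bits
print(round(id_bits, 2))                       # → 2.81
print(round(index_of_performance(id_bits, 1.6), 2))  # → 1.75 bits/s
```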
10.
For determining distances (fetch lengths) from points to polygons in a two-dimensional Euclidean plane, cell-based algorithms provide a simple and effective solution. They divide the input area into a grid of cells that cover the area. The objects are stored in the appropriate cells, and the resulting structure is used for solving the problem. When the input objects are distributed unevenly or the cell size is small, most of the cells may be empty; the representation is then called sparse. In the method proposed in this work, each cell contains information about its distance to the nonempty cells. It is then possible to skip over several empty cells at a time without memory accesses. A cell-based fetch length algorithm is implemented on a graphics processing unit (GPU). Because control flow divergence reduces its performance, several methods to reduce the divergence are studied. While many of the explicit attempts turn out to be unsuccessful, sorting the input data and sparse traversal are observed to greatly improve performance: compared with the initial GPU implementation, up to a 45-fold speedup is reached. The speed improvement is greatest when the map is very sparse and the points are given in a random order. Copyright © 2015 John Wiley & Sons, Ltd.
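The skip principle behind the sparse traversal can be sketched in one dimension: if each empty cell stores its distance to the nearest nonempty cell, a scan can safely jump that many cells forward, since everything closer is guaranteed empty. This is only an illustration of the idea, not the paper's 2-D GPU implementation:

```python
# Sketch (1-D) of the empty-cell skip idea: each cell stores its
# distance to the nearest nonempty cell, so a scan jumps over runs of
# empty cells instead of visiting each one.

def build_skip(grid):
    """For each cell, distance to the nearest nonempty cell (0 if nonempty)."""
    n = len(grid)
    dist = [0 if occupied else n for occupied in grid]
    for i in range(1, n):                  # forward sweep
        dist[i] = min(dist[i], dist[i - 1] + 1)
    for i in range(n - 2, -1, -1):         # backward sweep
        dist[i] = min(dist[i], dist[i + 1] + 1)
    return dist

def first_hit(grid, dist, start):
    """Index of the first nonempty cell at or after `start`, or None."""
    i = start
    while i < len(grid):
        if dist[i] == 0:
            return i
        i += dist[i]   # safe jump: all cells closer than dist[i] are empty
    return None

occupied = [True] + [False] * 6 + [True, False, False]
d = build_skip(occupied)
print(d)                        # → [0, 1, 2, 3, 3, 2, 1, 0, 1, 2]
print(first_hit(occupied, d, 1))  # → 7, visiting only cells 1, 2, 4
```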