Article Search
41 results found (search time: 15 ms)
1.
The Discrete Cosine Transform (DCT) is one of the most widely used techniques for image compression. Several algorithms have been proposed to implement the DCT-2D. The scaled SDCT algorithm is an optimization of the DCT-1D that consists of gathering all the multiplications at the end. In this paper, in addition to a hardware implementation on an FPGA, an extended optimization is performed by merging the multiplications into the quantization block without affecting image quality. A simplified quantization scheme is also used to maintain the performance of the whole chain. Tests in the MATLAB environment have shown that the proposed approach produces images with nearly the same quality as those obtained using the JPEG standard. An FPGA-based implementation of the proposed approach is presented and compared to other state-of-the-art techniques. The target is an Altera Cyclone II FPGA using the Quartus synthesis tool. Results show that our approach outperforms the others in terms of processing speed, resource usage, and power consumption. A comparison is also made between this architecture and a distributed-arithmetic-based architecture.
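The merging of the DCT scale factors into the quantization step can be sketched as follows. This is a minimal NumPy illustration of the idea, not the paper's FPGA architecture; the flat quantization table is illustrative.

```python
import numpy as np

def dct_matrix(n=8):
    # Orthonormal DCT-II basis matrix: coefficients = C @ block @ C.T
    k = np.arange(n)[:, None]
    i = np.arange(n)[None, :]
    C = np.sqrt(2.0 / n) * np.cos(np.pi * (2 * i + 1) * k / (2 * n))
    C[0, :] = np.sqrt(1.0 / n)
    return C

def dct2(block, C):
    return C @ block @ C.T

def idct2(coeffs, C):
    return C.T @ coeffs @ C

# A "scaled" DCT defers the per-row scale factors s[k] to the end.
# Instead of applying them and then dividing by Q[k, l], divide once
# by the merged table Q[k, l] / (s[k] * s[l]) -- one operation saved
# per coefficient, with identical output.
s = np.full(8, np.sqrt(2.0 / 8))
s[0] = np.sqrt(1.0 / 8)
Q = np.full((8, 8), 16.0)        # illustrative flat quantization table
Q_merged = Q / np.outer(s, s)    # scale factors folded into the table
```

Dividing the unscaled transform by `Q_merged` gives exactly the same quantized coefficients as scaling first and dividing by `Q`.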
2.

Background

The use of crowdsourcing in a pedagogically supported form to partner with learners in developing novel content is emerging as a viable approach for engaging students in higher-order learning at scale. However, how students behave in this form of crowdsourcing, referred to as learnersourcing, is still insufficiently explored.

Objectives

To contribute to filling this gap, this study explores how students engage with learnersourcing tasks across a range of course and assessment designs.

Methods

We conducted an exploratory study of trace data from 1279 students across three courses, collected from the use of a learnersourcing environment under different assessment designs. We employed a new methodology from the learning analytics (LA) field that represents students' behaviour through two theoretically derived latent constructs: learning tactics and the learning strategies built upon them.

Results

The study's results demonstrate that students use different tactics and strategies; highlight the association of learnersourcing contexts with the identified learning tactics and strategies; indicate a significant association between the strategies and performance; and contribute to the employed method's generalisability by applying it to a new context.

Implications

This study provides an example of how learning analytics methods can be employed in the development of effective learnersourcing systems and, more broadly, technological educational solutions that support learner-centred and data-driven learning at scale. The findings should inform best practices for integrating learnersourcing activities into course design and shed light on the relevance of tactics and strategies for supporting teachers in making informed pedagogical decisions.
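The tactic-detection step described in the Methods can be pictured with a deliberately simplified sketch: each session's trace becomes an action-frequency vector, and sessions are clustered into candidate tactics. The action vocabulary and the plain 2-means clustering below are illustrative assumptions; the LA literature typically uses richer sequence models.

```python
import numpy as np

ACTIONS = ["read", "answer", "rate", "create"]  # illustrative action vocabulary

def tactic_features(sessions):
    # One relative-frequency vector per study session.
    feats = np.array([[s.count(a) for a in ACTIONS] for s in sessions], float)
    return feats / feats.sum(axis=1, keepdims=True)

def two_means(X, iters=50):
    # Minimal k-means (k=2), seeded with the first and last rows.
    cent = X[[0, -1]].astype(float)
    for _ in range(iters):
        lab = ((X[:, None] - cent[None]) ** 2).sum(-1).argmin(1)
        for j in range(2):
            if (lab == j).any():
                cent[j] = X[lab == j].mean(0)
    return lab
```

Each resulting cluster stands in for a "learning tactic"; sequences of tactics per student would then be mined for strategies.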
3.
The effect of learning from 2D versus 3D educational content on memory has been studied using electroencephalography (EEG) brain signals. We hypothesized that 3D materials are better than 2D materials for learning and memory recall. To test this hypothesis, we propose a classification system that predicts true or false recall for short-term memory (STM) and long-term memory (LTM) after learning with either 2D or 3D educational content. EEG brain signals are recorded during learning and testing; the signals are then analysed in the time domain using different types of features in various frequency bands. The features are fed into a support vector machine (SVM)-based classifier. The experimental results indicate that learning and memory recall using 2D and 3D content do not differ significantly for either STM or LTM.
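The band-wise feature extraction stage can be sketched as follows. This is a minimal single-channel periodogram version with illustrative band edges; the resulting per-band powers would be what feeds the SVM classifier.

```python
import numpy as np

BANDS = {"theta": (4, 8), "alpha": (8, 13), "beta": (13, 30)}  # Hz, illustrative edges

def band_powers(signal, fs):
    # Power per EEG band for one single-channel epoch, via the FFT periodogram.
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    psd = np.abs(np.fft.rfft(signal)) ** 2
    return {name: psd[(freqs >= lo) & (freqs < hi)].sum()
            for name, (lo, hi) in BANDS.items()}
```

A 10 Hz oscillation, for instance, should concentrate its power in the alpha band.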
4.
The objective of this study was to fabricate a dual‐layer hollow fiber as a microreactor for potential syngas production via a phase inversion‐based co‐extrusion/co‐sintering process. As the main challenge of phase inversion is the difficulty of obtaining defect‐free fibers, this work focuses on the effect of the fabrication parameters, that is, nonsolvent content, sintering temperature, and outer‐layer extrusion rate, on the macrostructure of the produced hollow fiber. SEM images confirm that the addition of nonsolvent successfully minimizes finger‐like formation. At high sintering temperature, a denser hollow fiber is formed, while the outer‐layer extrusion rate affects the outer-layer thickness.
5.
6.
To obtain detailed information on the behaviour of the Homogeneous Charge Compression Ignition (HCCI) auto-ignition process, a reduced surrogate mechanism has been composed from reduced n-heptane, iso-octane, and toluene mechanisms, containing 62 reactions and 49 species. This mechanism has been validated numerically in a 0D HCCI engine code against more detailed mechanisms (inlet temperature varying from 290 to 500 K, equivalence ratio from 0.2 to 0.7, and compression ratio from 8 to 18), and experimentally against shock tube and rapid compression machine data from the literature at pressures between 9 and 55 bar and temperatures between 700 and 1400 K for several fuels: the pure compounds n-heptane, iso-octane, and toluene, as well as binary and ternary mixtures of these compounds. For this validation, stoichiometric mixtures and mixtures with an equivalence ratio of 0.5 are used. The experimental validation is extended by comparing the surrogate mechanism to experimental data from an HCCI engine. A global reaction pathway is proposed for the auto-ignition of a surrogate gasoline, using the surrogate mechanism, in order to show the interactions that the three compounds can have with one another during the auto-ignition of a ternary mixture.
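The kind of 0D ignition-delay calculation used in the validation can be caricatured with a single-step Arrhenius model in a constant-volume reactor. This is far simpler than the 62-reaction surrogate mechanism, and every rate and thermodynamic constant below is an illustrative placeholder; the sketch only reproduces the qualitative trend that delay shortens as initial temperature rises.

```python
import numpy as np

def ignition_delay(T0, A=1e10, Ea=1.5e5, R=8.314,
                   q=2.0e6, cv=1200.0, dt=1e-6, t_max=0.1):
    # Single-step global reaction fuel -> products with rate k = A exp(-Ea/RT).
    # Explicit-Euler integration of fuel mass fraction Y and temperature T;
    # ignition is flagged at a 400 K temperature rise (common criterion shape).
    T, Y, t = float(T0), 1.0, 0.0
    while t < t_max:
        w = A * np.exp(-Ea / (R * T)) * Y   # reaction rate
        Y -= w * dt                          # fuel consumption
        T += (q / cv) * w * dt               # heat release raises temperature
        t += dt
        if T > T0 + 400.0:
            return t
    return None  # no ignition within t_max
```

Running it at two initial temperatures shows the expected monotonic behaviour: the hotter charge ignites sooner.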
7.
The detection of alcoholism is of great importance due to its effects on individuals and society. Automatic alcoholism detection systems (AADS) based on electroencephalogram (EEG) signals are effective, but the design of a robust AADS is a challenging problem. Current AADS designs are based on conventional, hand-engineered methods and offer limited performance. Motivated by the success of deep learning (DL) in many recognition tasks, we implement an AAD system based on EEG signals using DL. A DL model requires a huge number of learnable parameters and a large dataset of EEG signals for training, which is not easy to obtain for the AAD problem. To solve this problem, we propose a multi-channel pyramidal convolutional neural network (MP-CNN) that requires fewer learnable parameters. Using this deep CNN model, we build an AAD system to detect from EEG signal segments whether a subject is alcoholic or normal. We validate the robustness and effectiveness of the proposed AADS using KDD, a benchmark dataset for the alcoholism detection problem. To find the brain regions that play a significant role in AAD, we investigated the effects of 19 selected EEG channels (SC-19), all channels from the whole brain (ALL-61), and five brain regions, i.e., TEMP, OCCIP, CENT, FRONT, and PERI. The results show that SC-19 plays a significant role in AAD, with an accuracy of 100%. The comparison reveals that the proposed AADS outperforms state-of-the-art systems. The proposed AADS will be useful in medical diagnosis research and health care systems.
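Why a pyramidal filter schedule needs fewer learnable parameters can be shown with a simple count. The layer widths below are illustrative, not the paper's exact MP-CNN configuration; only the 61-channel input reflects the ALL-61 setting mentioned in the abstract.

```python
def conv1d_params(channels, k=3):
    # Parameter count of a stack of 1-D convolutions (weights + biases);
    # channels = [in, out_layer1, out_layer2, ...], kernel length k.
    return sum(cin * cout * k + cout
               for cin, cout in zip(channels, channels[1:]))

# Conventional CNN: constant width across layers.
conventional = conv1d_params([61, 64, 64, 64, 64])
# Pyramidal CNN: the number of filters shrinks layer by layer.
pyramidal = conv1d_params([61, 64, 48, 32, 16])
```

With the same depth and kernel size, the shrinking schedule roughly halves the parameter budget, which eases training on the limited EEG data available for this task.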
8.
Recently, many researchers have tried to develop robust, fast, and accurate algorithms for eye tracking and pupil position detection, which are needed in applications such as head-mounted eye tracking, gaze-based human-computer interaction, medical applications (e.g., for deaf and diabetic patients), and attention analysis. Many real-world conditions challenge the eye's appearance, such as illumination, reflections, and occlusions; individual differences in eye physiology and other sources of noise, such as contact lenses or make-up, add further difficulty. The present work introduces a robust pupil detection algorithm with higher accuracy than previous attempts, suited for real-time analytics applications. The proposed circular Hough transform with morphing Canny edge detection for pupillometry (CHMCEP) algorithm can handle even blurred or noisy images: filtering in the pre-processing phase removes blur and noise, and a second filtering step before the circular Hough transform centre fitting further improves accuracy. The performance of the proposed CHMCEP algorithm was tested against recent pupil detection methods. Simulations show that the proposed CHMCEP algorithm achieved detection rates of 87.11, 78.54, 58, and 78 on the Świrski, ExCuSe, ElSe, and labeled pupils in the wild (LPW) data sets, respectively. These results show that the proposed approach outperforms the other pupil detection methods by a large margin, providing accurate and robust pupil positions on challenging everyday eye images.
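The centre-fitting core of a circular Hough transform can be sketched in a few lines: every edge pixel votes for all candidate centres lying one radius away, and the pupil centre emerges as the accumulator peak. This is a bare single-radius sketch, not the full CHMCEP pipeline (no Canny stage, no filtering, fixed radius).

```python
import numpy as np

def hough_circle_center(edge_pts, shape, radius, n_theta=180):
    # Accumulator voting: each edge point (y, x) votes along a circle of
    # the given radius around itself; the true centre collects the most votes.
    acc = np.zeros(shape, dtype=np.int32)
    theta = np.linspace(0.0, 2.0 * np.pi, n_theta, endpoint=False)
    for y, x in edge_pts:
        cy = np.rint(y - radius * np.sin(theta)).astype(int)
        cx = np.rint(x - radius * np.cos(theta)).astype(int)
        ok = (cy >= 0) & (cy < shape[0]) & (cx >= 0) & (cx < shape[1])
        np.add.at(acc, (cy[ok], cx[ok]), 1)
    return np.unravel_index(acc.argmax(), acc.shape)
```

On a synthetic circular edge map the accumulator peak recovers the centre to within a pixel.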
9.
The linear sampling method (LSM) offers a qualitative image reconstruction approach and is known as a viable alternative for obstacle support identification to the well-studied filtered backprojection (FBP), which depends on a linearized forward scattering model. Of practical interest is the imaging of obstacles from sparse-aperture far-field data under a fixed single-frequency mode of operation. Under this scenario, the Tikhonov regularization typically applied to LSM produces poor images that fail to capture the obstacle boundary. In this paper, we employ an alternative regularization strategy based on constraining the sparsity of the solution's spatial gradient. Two regularization approaches based on the spatial gradient are developed. A numerical comparison to the FBP demonstrates that the new method's ability to account for aspect-dependent scattering permits more accurate reconstruction of concave obstacles, whereas a comparison to Tikhonov-regularized LSM demonstrates that the proposed approach significantly improves obstacle recovery with sparse-aperture data.
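The two regularization styles being contrasted can be reduced to their basic building blocks: the Tikhonov closed form for the far-field equation, and the soft-thresholding (proximal) operator that sparsity-promoting schemes apply, in the paper's case to the spatial gradient of the solution. A toy sketch, not the paper's full iterative algorithm:

```python
import numpy as np

def tikhonov(A, b, alpha):
    # z = argmin ||A z - b||^2 + alpha ||z||^2, solved in closed form.
    n = A.shape[1]
    return np.linalg.solve(A.conj().T @ A + alpha * np.eye(n), A.conj().T @ b)

def soft_threshold(v, tau):
    # Proximal operator of tau * ||.||_1 -- the elementary step that
    # gradient-sparsity regularization applies to finite differences of z.
    return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)
```

Tikhonov shrinks every component uniformly, while soft-thresholding zeroes out small components entirely, which is what lets the gradient-sparse approach produce sharp obstacle boundaries from sparse-aperture data.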
10.
With the development of easy-to-use and sophisticated image editing software, altering the contents of digital images has become very easy to do and hard to detect. A digital image is a very rich source of information and can capture any event perfectly, but for this very reason its authenticity is questionable. In this paper, a novel passive image forgery detection method is proposed based on the local binary pattern (LBP) and the discrete cosine transform (DCT) to detect copy–move and splicing forgeries. First, discriminative localized features are extracted from the chrominance component of the input image by applying the 2D DCT in LBP space. Then, a support vector machine is used for detection. Experiments carried out on three image forgery benchmark datasets demonstrate the superiority of the method over recent methods in terms of detection accuracy.
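The "2D DCT in LBP space" feature idea can be sketched as two composable steps: compute an LBP code map of the (chrominance) channel, then take the 2D DCT of that map. The real method works block-wise and feeds the coefficients to an SVM; this is only the core of each step.

```python
import numpy as np

def lbp_map(img):
    # 8-neighbour local binary pattern over interior pixels: each neighbour
    # >= centre sets one bit, giving a code in 0..255 per pixel.
    c = img[1:-1, 1:-1]
    out = np.zeros(c.shape, dtype=np.int32)
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    h, w = img.shape
    for bit, (dy, dx) in enumerate(offsets):
        nbr = img[1 + dy:h - 1 + dy, 1 + dx:w - 1 + dx]
        out |= (nbr >= c).astype(np.int32) << bit
    return out

def dct2(block):
    # Orthonormal 2-D DCT-II of a square block via the transform matrix.
    n = block.shape[0]
    k = np.arange(n)[:, None]
    i = np.arange(n)[None, :]
    C = np.sqrt(2.0 / n) * np.cos(np.pi * (2 * i + 1) * k / (2 * n))
    C[0, :] = np.sqrt(1.0 / n)
    return C @ block @ C.T
```

Applying `dct2` to blocks of the LBP map concentrates the texture information into a few coefficients, which is what makes the features discriminative for tampered regions.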