Similar Documents
A total of 20 similar documents were found.
1.
Eliciting requirements for a proposed system inevitably involves the problem of handling undesirable information about customers' needs, including inconsistency, vagueness, redundancy, or incompleteness. We term the requirements statements involved in such undesirable information non-canonical software requirements. In this paper, we propose an approach to handling non-canonical software requirements based on Annotated Predicate Calculus (APC). Informally, by defining a special belief lattice appropriate for representing the stakeholders' belief in requirements statements, we construct a new form of APC to formalize requirements specifications. We then show how the APC can be employed to characterize non-canonical requirements. Finally, we show through a case study how the approach can be used to handle non-canonical requirements. Kedian Mu received the B.Sc. degree in applied mathematics from Beijing Institute of Technology, Beijing, China, in 1997, the M.Sc. degree in probability and mathematical statistics from Beijing Institute of Technology in 2000, and the Ph.D. in applied mathematics from Peking University, Beijing, China, in 2003. From 2003 to 2005, he was a postdoctoral researcher at the Institute of Computing Technology, Chinese Academy of Sciences, China. He is currently an assistant professor at the School of Mathematical Sciences, Peking University, Beijing, China. His research interests include uncertain reasoning in artificial intelligence, knowledge engineering and science, and requirements engineering. Zhi Jin was awarded the B.Sc. in computer science from Zhejiang University, Hangzhou, China, in 1984, and studied for her M.Sc. in computer science (expert systems) and her Ph.D. in computer science (artificial intelligence) at National Defence University of Technology, Changsha, China. She was awarded the Ph.D. in 1992. She is a senior member of the China Computer Federation. She is currently a professor at the Academy of Mathematics and System Sciences, Chinese Academy of Sciences. Her research interests include knowledge-based systems, artificial intelligence, requirements engineering, and ontology engineering. Her current research focuses on ontology-based requirements elicitation and analysis. She has published about 60 papers and co-authored one book. Ruqian Lu is a professor of computer science at the Institute of Mathematics, Chinese Academy of Sciences. His research interests include artificial intelligence, knowledge engineering, and knowledge-based software engineering. He designed the "Tian Ma" software systems, which have been widely applied in more than 20 fields, including national defense and the economy. He has won two first-class awards from the Chinese Academy of Sciences and a national second-class prize from the Ministry of Science and Technology. He has also won the sixth Hua Lookeng Prize for Mathematics. Yan Peng received the B.Sc. degree in software from Jilin University, Changchun, China, in 1992. From June 2002 to December 2005, he studied for his M.E. in software engineering at the College of Software Engineering, Graduate School of the Chinese Academy of Sciences, Beijing, China, and was awarded the M.E. degree in 2006. He is currently responsible for the CRM (customer relationship management) and BI (business intelligence) projects in the BONG. His research interests include customer relationship management, business intelligence, data mining, software engineering, and requirements engineering.

2.
3.
This paper introduces the design and implementation of BCL-3, high-performance low-level communication software running on a cluster of SMPs (CLUMPS) called DAWNING-3000. BCL-3 provides flexible and sufficient functionality to fulfill the communication requirements of the fundamental system software developed for DAWNING-3000 while guaranteeing security, scalability, and reliability. Important features of BCL-3 are presented in the paper, including special support for SMP and heterogeneous network environments, semi-user-level communication, reliable and ordered data transfer, and scalable flow control. A performance evaluation of BCL-3 over Myrinet is also given.

4.
Hardware and software co-design is a design technique which delivers computer systems comprising hardware and software components. A critical phase of the co-design process is to decompose a program into hardware and software. This paper proposes an algebraic partitioning algorithm whose correctness is verified in program algebra. The authors introduce a program analysis phase before program partitioning and develop a collection of syntax-based splitting rules. The former provides the information for moving operations from software to hardware and reducing the interaction between components, and the latter supports a compositional approach to program partitioning.

5.
Electronic commerce is an important application domain that has evolved significantly in recent years. However, electronic commerce systems are complex and difficult to design correctly. Guaranteeing the correctness of an e-commerce system is not an easy task due to the great number of scenarios in which errors occur, many of them very subtle. In this work we present a methodology that uses formal-method techniques, specifically symbolic model checking, to design electronic commerce applications and to verify them automatically. In addition, a model checking pattern hierarchy has been developed; it specifies patterns for constructing and verifying the formal model of e-commerce systems. We consider this research the first step toward the development of a framework which will integrate the methodology, an e-commerce specification language based on business rules, and a model checker. Adriano Pereira received the B.S. and M.S. degrees in computer science in 2000 and 2002, respectively, and is currently pursuing the Ph.D. degree in computer science at the Federal University of Minas Gerais, Belo Horizonte, Brazil. His current interests are performance analysis and modeling of e-business and distributed systems, and formal methods. Mark Song received the B.S., M.S. and Ph.D. degrees in computer science from the Federal University of Minas Gerais, Belo Horizonte, Brazil. His current interests are distributed systems and formal methods, especially BMC (Bounded Model Checking). Gustavo Franco received the B.S. and M.S. degrees in computer science in 2001 and 2004, respectively, from the Federal University of Minas Gerais, Belo Horizonte, Brazil. His research was on modeling the user behavior of e-business and distributed systems, and on formal methods. His current interests are software engineering and the project management of IT projects.

6.
In Part 2 of the Advanced Audio Video Coding Standard (AVS-P2), many efficient coding tools are adopted in motion compensation, such as new motion vector prediction, symmetric matching, quarter-pixel precision interpolation, etc. However, these new features enormously increase the computational complexity and the memory bandwidth requirement, which make motion compensation a difficult component in the implementation of an AVS HDTV decoder. This paper proposes an efficient motion compensation architecture for the AVS-P2 video standard up to Level 6.2 of the Jizhun Profile. It has a macroblock-level pipelined structure which consists of an MV predictor unit, a reference fetch unit and a pixel interpolation unit. The proposed architecture exploits the parallelism in the AVS motion compensation algorithm to accelerate the speed of operations and uses a dedicated design to optimize memory access. It has been integrated in a prototype chip fabricated with TSMC 0.18-μm CMOS technology, and the experimental results show that this architecture can achieve real-time AVS-P2 decoding for HDTV 1080i (1920×1088, 4:2:0, 60 fields/s) video. The design can work at a frequency of 148.5 MHz and the total gate count is about 225K.

7.
A major overhead in software DSM (Distributed Shared Memory) is the cost of remote memory accesses necessitated by the protocol as well as induced by false sharing. This paper introduces a dynamic prefetching method implemented in the JIAJIA software DSM to reduce the system overhead caused by remote accesses. The prefetching method records the interleaving string of INV (invalidation) and GETP (getting a remote page) operations for each cached page and analyzes the periodicity of the string when a page is invalidated on a lock or barrier. A prefetching request is issued after the lock or barrier if the periodicity analysis indicates that GETP will be the next operation in the string. Multiple prefetching requests are merged into the same message if they are sent to the same host. Performance evaluation with eight well-accepted benchmarks on a cluster of sixteen PowerPC workstations shows that the prefetching scheme can significantly reduce the page fault overhead and, as a result, achieves a performance increase of 15%-20% in three benchmarks and around 8%-10% in another three. The average extra traffic caused by useless prefetches is only 7%-13% in the evaluation.
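The abstract describes the prefetch decision only at a high level. The minimal sketch below, written in Python purely for illustration (the function names shortest_period and should_prefetch are hypothetical, and this is not the JIAJIA implementation), shows one way a per-page INV/GETP history could be tested for periodicity and used to decide whether to prefetch after a lock or barrier.

```python
# Illustrative sketch (not the JIAJIA implementation): find the shortest period
# of a page's recorded INV/GETP operation string and predict the next operation.

def shortest_period(ops):
    """Return the length of the shortest repeating period of `ops`, or None."""
    n = len(ops)
    for p in range(1, n // 2 + 1):
        if all(ops[i] == ops[i - p] for i in range(p, n)):
            return p
    return None

def should_prefetch(ops):
    """Prefetch only if the periodic pattern predicts GETP as the next operation."""
    p = shortest_period(ops)
    if p is None:
        return False
    return ops[len(ops) % p] == "GETP"   # next element implied by the period

# Example: after every INV the page was fetched again, so prefetch at the barrier.
history = ["INV", "GETP", "INV", "GETP", "INV"]
print(should_prefetch(history))  # True
```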

8.
In this paper, a facial animation system is proposed for simultaneously capturing, from video clips, both the geometric information and the illumination changes of surface details, called expression details, and the captured data can be widely applied to different 2D face images and 3D face models. While tracking the geometric data, we record the expression details as ratio images. For 2D facial animation synthesis, these ratio images are used to generate dynamic textures. Because a ratio image is obtained by dividing the colors of an expressive face by those of a neutral face, pixels with a ratio value smaller than one are where a wrinkle or crease appears. Therefore, the gradients of the ratio value at each pixel in the ratio images are regarded as changes of the face surface, and the original normals on the surface can be adjusted according to these gradients. Based on this idea, we can convert the ratio images into a sequence of normal maps and then apply them to animated 3D model rendering. With the expression detail mapping, the resulting facial animations are more lifelike and more expressive.
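As a rough illustration of the ratio-image idea summarized above (this is not the authors' system; the NumPy helper names such as ratio_to_normal_map are made up for the sketch), the following shows how an expressive frame divided by a neutral frame yields a ratio image whose gradients can perturb flat normals into a normal map.

```python
# Illustrative sketch of the ratio-image idea, not the authors' code: divide an
# expressive frame by the neutral frame, then bend flat surface normals by the
# ratio image's gradients to approximate wrinkles in a normal map.
import numpy as np

def ratio_image(expressive, neutral, eps=1e-6):
    """Per-pixel ratio of gray-scale expressive and neutral face images."""
    return expressive / (neutral + eps)

def ratio_to_normal_map(ratio, strength=1.0):
    """Perturb flat normals (0, 0, 1) by the gradients of the ratio image."""
    gy, gx = np.gradient(ratio)                      # changes of the face surface
    n = np.dstack([-strength * gx, -strength * gy, np.ones_like(ratio)])
    n /= np.linalg.norm(n, axis=2, keepdims=True)    # renormalize
    return n                                         # H x W x 3 normal map

# Example with synthetic 4x4 gray-scale frames in [0, 1].
neutral = np.full((4, 4), 0.8)
expressive = neutral.copy()
expressive[2, 1:3] = 0.6                             # a darker crease
normals = ratio_to_normal_map(ratio_image(expressive, neutral))
print(normals.shape)                                 # (4, 4, 3)
```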

9.
Fingerprint recognition is based on minutiae matching, and the matching correctness depends on the accuracy of the extracted minutiae. Fingerprint enhancement and postprocessing are used to reduce false minutiae. In this paper, we propose fingerprint enhancement and postprocessing methods based on the directional fields of a fingerprint. We enhance the fingerprint directly on the gray-scale image and remove most false minutiae in the postprocessing step. The achieved results are compared with other methods, and both the reduction of false minutiae and the recovery of dropped minutiae are improved. The text was submitted by the authors in English. Gwo-Cheng Chao was born in Dasi, Taoyuan, Taiwan, in 1978. He received the MS degree in computer science and information engineering from Taiwan University of Science and Technology, Taiwan, in 2004. He is currently pursuing a PhD degree in networking and multimedia at National Taiwan University, Taipei, Taiwan. His research interests include pattern recognition, image processing, computer vision, biometrics, computer graphics, and multimedia systems. Shung-Shing Lee received BS and MS degrees in electronic engineering and a PhD degree in electrical engineering in 1980, 1987, and 1996, respectively, all from National Taiwan Institute of Technology, Taipei, Taiwan. Currently, he is an associate professor in the Department of Electrical Engineering, Ching Yun University, Jung-Li, Taiwan. His research interests include image processing, biometrics, embedded system design, SOPC, parallel computing, and parallel algorithms. Hung-Chuan Lai received his MS degree in computer science and information engineering from Chung-Hua University, Hsinchu, Taiwan, in 2002. He is currently pursuing a PhD degree at National Taiwan University of Science and Technology, Taipei, Taiwan. His research interests include image processing, VLSI, fault tolerance architecture, embedded system design, data compression, computer architecture and organization, and biometrics.
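The abstract does not spell out how the directional field is computed. The sketch below shows a common gradient-based, block-wise orientation estimate often used for fingerprints; it is offered as background only and is not necessarily the method used in the paper.

```python
# Illustrative sketch: a standard gradient-based estimate of a fingerprint's
# directional (orientation) field, computed block by block. A common textbook
# formulation, not necessarily the method of the paper above.
import numpy as np

def orientation_field(img, block=16):
    """Return a (H//block, W//block) array of ridge orientations in radians."""
    gy, gx = np.gradient(img.astype(float))
    h, w = img.shape
    out = np.zeros((h // block, w // block))
    for i in range(h // block):
        for j in range(w // block):
            sl = (slice(i * block, (i + 1) * block),
                  slice(j * block, (j + 1) * block))
            gxx = np.sum(gx[sl] ** 2)
            gyy = np.sum(gy[sl] ** 2)
            gxy = np.sum(gx[sl] * gy[sl])
            # Least-squares ridge direction is orthogonal to the dominant gradient.
            out[i, j] = 0.5 * np.arctan2(2 * gxy, gxx - gyy) + np.pi / 2
    return out

# Example usage on a synthetic gray-scale patch with vertical ridges.
img = np.tile(np.sin(np.linspace(0, 8 * np.pi, 64)), (64, 1))
print(orientation_field(img).shape)  # (4, 4)
```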

10.
Software configuration management (SCM) is an important key technology in software development. Component-based software development (CBSD) is an emerging paradigm in software development. However, to apply CBSD effectively in real-world practice, supporting SCM in CBSD needs to be further investigated. In this paper, the objects that need to be managed in CBSD are analyzed and a component-based SCM model is presented. In this model, components, as the integral logical constituents of a system, are managed as the basic configuration items in SCM, and the relationships between/among components are defined and maintained. Based on this model, a configuration management system is implemented.

11.
12.
13.
Parallelizing compilers have made great progress in recent years. However, there still remains a gap between the current ability of parallelizing compilers and their final goals. In order to achieve maximum parallelism, run-time techniques have been used in parallelizing compilers in the last few years. First, this paper presents a basic run-time privatization method; for the definition of run-time dead code, backward data-flow information must be used. The Proteus Test, which can use backward information at run time, is then presented to exploit more dynamic parallelism. Also, a variation of the Proteus Test, the Advanced Proteus Test, is offered to achieve partial parallelism. The Proteus Test was implemented in the parallelizing compiler AFT. At the end of this paper, the program fppp.f of the Spec95fp benchmark is taken as an example to show the effectiveness of the Proteus Test.

14.
In this paper, a new effective method is proposed to find class association rules (CAR), to obtain useful class association rules (UCAR) by removing spurious class association rules (SCAR), and to generate exception class association rules (ECAR) for each UCAR. CAR mining, which integrates the techniques of classification and association, has attracted great interest recently. However, it has two drawbacks: one is that a large part of CARs are spurious and may be misleading to users; the other is that some important ECARs are difficult to find using traditional data mining techniques. The method introduced in this paper aims to overcome these flaws. With our approach, a user can retrieve correct information from UCARs and learn the influence of different conditions by checking the corresponding ECARs. Experimental results demonstrate the effectiveness of the proposed approach.

15.
Error Analysis for Image Inpainting
Image inpainting refers to restoring a damaged image with missing information. In recent years, there have been many developments in computational approaches to the image inpainting problem [2, 4, 6, 9, 11–13, 27, 28]. While many effective algorithms are available, there is still a lack of theoretical understanding of the conditions under which these algorithms work well. In this paper, we take a step in this direction. We investigate an error bound for inpainting methods by considering different image spaces such as smooth images, piecewise constant images, and a particular kind of piecewise continuous images. Numerical results are presented to validate the theoretical error bounds. Tony F. Chan received the B.S. degree in engineering and the M.S. degree in aerospace engineering in 1973 from the California Institute of Technology, and the Ph.D. degree in computer science from Stanford University in 1978. He is Professor of Mathematics and currently also Dean of the Division of Physical Sciences at the University of California, Los Angeles, where he has been a professor since 1986. His research interests include mathematical and computational methods in image processing, multigrid, domain decomposition algorithms, iterative methods, Krylov subspace methods, and parallel algorithms. Sung Ha Kang received the Ph.D. degree in mathematics in 2002 from the University of California, Los Angeles, and has been Assistant Professor of Mathematics at the University of Kentucky since 2002. Her research interests include mathematical and computational methods in image processing and computer vision.

16.
The pairwise attribute noise detection algorithm
Analyzing the quality of data prior to constructing data mining models is emerging as an important issue. Algorithms for identifying noise in a given data set can provide a good measure of data quality. Considerable attention has been devoted to detecting class noise or labeling errors. In contrast, limited research has been devoted to detecting instances with attribute noise, in part due to the difficulty of the problem. We present a novel approach for detecting instances with attribute noise and demonstrate its usefulness with case studies using two different real-world software measurement data sets. Our approach, called the Pairwise Attribute Noise Detection Algorithm (PANDA), is compared with a nearest-neighbor, distance-based outlier detection technique (denoted DM) investigated in the related literature. Since what constitutes noise is domain specific, our case studies use a software engineering expert to inspect the instances identified by the two approaches and determine whether they actually contain noise. It is shown that PANDA provides better noise detection performance than the DM algorithm. Jason Van Hulse is a Ph.D. candidate in the Department of Computer Science and Engineering at Florida Atlantic University. His research interests include data mining and knowledge discovery, machine learning, computational intelligence, and statistics. He is a student member of the IEEE and the IEEE Computer Society. He received the M.A. degree in mathematics from Stony Brook University in 2000, and is currently Director, Decision Science at First Data Corporation. Taghi M. Khoshgoftaar is a professor in the Department of Computer Science and Engineering, Florida Atlantic University, and the director of the Empirical Software Engineering and Data Mining and Machine Learning Laboratories. His research interests are in software engineering, software metrics, software reliability and quality engineering, computational intelligence, computer performance evaluation, data mining, machine learning, and statistical modeling. He has published more than 300 refereed papers in these areas. He has been a principal investigator and project leader on a number of projects with industry, government, and other research-sponsoring agencies. He is a member of the IEEE, the IEEE Computer Society, and the IEEE Reliability Society. He served as the program chair and general chair of the IEEE International Conference on Tools with Artificial Intelligence in 2004 and 2005, respectively. He has also served on the technical program committees of various international conferences, symposia, and workshops. He has served as North American editor of the Software Quality Journal, and is on the editorial boards of the journals Empirical Software Engineering, Software Quality, and Fuzzy Systems. Haiying Huang received the M.S. degree in computer engineering from Florida Atlantic University, Boca Raton, Florida, USA, in 2002. She is currently a Ph.D. candidate in the Department of Computer Science and Engineering at Florida Atlantic University. Her research interests include software engineering, computational intelligence, data mining, software measurement, software reliability, and quality engineering.
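The abstract names the DM baseline but gives no details of either algorithm. The sketch below is a generic k-nearest-neighbor, distance-based outlier score of the kind DM refers to, with made-up function names; it is not PANDA and not necessarily the exact DM variant used in the case studies.

```python
# Illustrative sketch of a generic k-nearest-neighbor, distance-based outlier
# score, of the kind the DM baseline above refers to; not the exact DM or PANDA
# algorithm, whose details are not given in the abstract.
import numpy as np

def knn_outlier_scores(X, k=3):
    """Score each instance by its mean distance to its k nearest neighbors."""
    d = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=2)  # pairwise distances
    np.fill_diagonal(d, np.inf)                                # ignore self-distance
    nearest = np.sort(d, axis=1)[:, :k]
    return nearest.mean(axis=1)

# Example: the last instance lies far from the cluster and gets the highest score.
X = np.array([[1.0, 1.0], [1.1, 0.9], [0.9, 1.1], [1.0, 0.8], [5.0, 5.0]])
scores = knn_outlier_scores(X, k=2)
print(scores.argmax())  # 4
```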

17.
The need to improve software productivity and software quality has pushed forward research on software metrics technology and the development of software metrics tools to support related activities. To support object-oriented software metrics practice effectively, a model-based approach to object-oriented software metrics is proposed in this paper. This approach guides metrics users in adopting a quality metrics model to measure object-oriented software products. The model can be developed using a top-down approach. The approach explicitly proposes the concepts of absolute normalization computation and relative normalization computation for a metrics model. Moreover, a generic software metrics tool, the Jade Bird Object-Oriented Metrics Tool (JBOOMT), is designed to implement this approach. The parser-based approach adopted by the tool makes the information of the source program accurate and complete for measurement. It supports various customizable hierarchical metrics models and provides a flexible user interface for users to manipulate the models. It also supports absolute and relative normalization mechanisms in different situations.

18.
Classification is an important technique in data mining. The decision trees built by most of the existing classification algorithms commonly feature over-branching, which leads to poor efficiency in the subsequent classification period. In this paper, we present a new value-oriented classification method which aims at building accurate, properly sized decision trees while reducing over-branching as much as possible, based on the concepts of frequent-pattern-node and exceptive-child-node. The experiments show that, using relevance analysis as pre-processing, our classification method, without loss of accuracy, can eliminate over-branching in decision trees more effectively and efficiently than other algorithms do.

19.
This paper introduces a new algorithm for mining association rules. The algorithm RP counts the itemsets of different sizes in the same pass of scanning over the database by dividing the database into m partitions. The total number of passes over the database is only (k+2m-2)/m, where k is the size of the longest itemset; this is much less than k.
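A quick arithmetic check of the claimed pass count, under the assumption that the partially garbled formula in the abstract reads (k+2m-2)/m:

```latex
% Worked example under the assumption that the pass count reads (k+2m-2)/m,
% which is how the partially garbled formula in the abstract has been read here.
% With longest itemset size k = 10 and m = 5 partitions:
\[
  \frac{k + 2m - 2}{m} \;=\; \frac{10 + 2\cdot 5 - 2}{5} \;=\; \frac{18}{5} \;=\; 3.6
  \;\ll\; k = 10,
\]
% i.e., roughly 4 scans of the database instead of the k scans needed by a
% level-wise algorithm that handles one itemset size per pass.
```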

20.
A Novel Computer Architecture to Prevent Destruction by Viruses
In today's Internet computing world, illegal activities by crackers pose a serious threat to computer security. It is well known that computer viruses, Trojan horses and other intrusive programs may cause severe and often catastrophic consequences. This paper proposes a novel secure computer architecture based on security-codes. Every instruction/data word is tagged with a security-code denoting its security level. External programs and data are automatically tagged with a security-code by hardware when entering the computer system. An instruction with a lower security-code cannot run on or process instructions/data with a higher security level. The security-code cannot be modified by normal instructions. With minor hardware overhead, the new architecture can effectively protect the main computer system from destruction or theft by intrusive programs such as computer viruses. For most PC systems it requires an increase of the word length by 1 bit in the registers, the memory and the hard disk.
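To make the tagging rule concrete, here is a toy software simulation (the paper describes a hardware mechanism; the class and function names below are hypothetical and purely illustrative):

```python
# Toy software simulation of the security-code rule described above. The paper
# describes a hardware mechanism; this sketch, with hypothetical names, only
# illustrates the check "lower security-code cannot process higher-level data".
from dataclasses import dataclass

@dataclass(frozen=True)            # frozen: normal code cannot modify the tag
class TaggedWord:
    value: int
    security_code: int             # higher value = higher security level

EXTERNAL_LEVEL = 0                 # tag given to words entering from outside

def tag_external(value):
    """Hardware would attach the lowest security-code to external input."""
    return TaggedWord(value, EXTERNAL_LEVEL)

def execute(instruction: TaggedWord, operand: TaggedWord) -> TaggedWord:
    """Refuse to let a low-security instruction touch higher-security data."""
    if instruction.security_code < operand.security_code:
        raise PermissionError("security-code violation: access denied")
    return TaggedWord(instruction.value + operand.value, operand.security_code)

system_data = TaggedWord(42, security_code=3)      # protected system word
virus_instr = tag_external(7)                      # untrusted external code
try:
    execute(virus_instr, system_data)
except PermissionError as e:
    print(e)                                       # security-code violation: ...
```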
