Similar Literature
1.
A critical problem in software development is the monitoring, control and improvement of the processes of software developers. Software processes are often not explicitly modeled, and manuals to support the development work contain abstract guidelines and procedures. Consequently, there are huge differences between ‘actual’ and ‘official’ processes: “the actual process is what you do, with all its omissions, mistakes, and oversights. The official process is what the book, i.e., a quality manual, says you are supposed to do” (Humphrey in A discipline for software engineering. Addison-Wesley, New York, 1995). Software developers lack support to identify, analyze and better understand their processes. Consequently, process improvements are often not based on an in-depth understanding of the ‘actual’ processes, but on organization-wide improvement programs or ad hoc initiatives of individual developers. In this paper, we show that, based on particular data from software development projects, the underlying software development processes can be extracted and that more realistic process models can be constructed automatically. This is called software process mining (Rubin et al. in Process mining framework for software processes. Software process dynamics and agility. Springer Berlin, Heidelberg, 2007). The goal of process mining is to better understand the development processes, to compare constructed process models with the ‘official’ guidelines and procedures in quality manuals and, subsequently, to improve development processes. This paper reports on process mining case studies in a large industrial company in The Netherlands. The subject of the process mining is a particular process: the change control board (CCB) process. The results of process mining are fed back to practice in order to subsequently improve the CCB process.
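As a minimal, hypothetical illustration of the kind of model construction that process mining performs (not the tooling or event data used in the case studies reported here), the Python sketch below derives a directly-follows relation from a small made-up change-request log; all field and activity names are assumptions.

    from collections import defaultdict

    # Hypothetical event log: (case id, activity, timestamp ordinal).
    # In a CCB setting these events would come from the change-management system.
    event_log = [
        ("CR-1", "submit", 1), ("CR-1", "analyze", 2), ("CR-1", "approve", 3),
        ("CR-2", "submit", 1), ("CR-2", "analyze", 2), ("CR-2", "reject", 3),
        ("CR-3", "submit", 1), ("CR-3", "approve", 2),   # analysis step skipped
    ]

    def directly_follows(log):
        """Count how often activity a is directly followed by activity b within a case."""
        traces = defaultdict(list)
        for case, activity, ts in sorted(log, key=lambda e: (e[0], e[2])):
            traces[case].append(activity)
        edges = defaultdict(int)
        for trace in traces.values():
            for a, b in zip(trace, trace[1:]):
                edges[(a, b)] += 1
        return dict(edges)

    print(directly_follows(event_log))
    # {('submit', 'analyze'): 2, ('analyze', 'approve'): 1, ('analyze', 'reject'): 1, ('submit', 'approve'): 1}

Comparing such a mined directly-follows graph against the ‘official’ CCB procedure is the kind of conformance question the case studies address.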

2.
With scientific data available at geocoded locations, investigators are increasingly turning to spatial process models for carrying out statistical inference. However, fitting spatial models often involves expensive matrix decompositions, whose computational complexity increases in cubic order with the number of spatial locations. This situation is aggravated in Bayesian settings where such computations are required once at every iteration of the Markov chain Monte Carlo (MCMC) algorithms. In this paper, we describe the use of Variational Bayesian (VB) methods as an alternative to MCMC to approximate the posterior distributions of complex spatial models. Variational methods, which have been used extensively in Bayesian machine learning for several years, provide a lower bound on the marginal likelihood, which can be computed efficiently. We provide results for the variational updates in several models, especially emphasizing their use in multivariate spatial analysis. We demonstrate estimation and model comparisons from VB methods by using simulated data as well as environmental data sets and compare them with inference from MCMC.
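The lower bound referred to is the standard variational (evidence) lower bound; in generic notation, for data y, parameters θ and a variational density q,

    \[
    \log p(y) \;=\; \log \int p(y,\theta)\,d\theta
    \;\ge\; \mathbb{E}_{q}\!\left[\log p(y,\theta)\right] \;-\; \mathbb{E}_{q}\!\left[\log q(\theta)\right],
    \]

with equality when q(θ) equals the exact posterior p(θ | y). Maximizing the right-hand side over a tractable family of densities yields the VB approximation to the posterior without MCMC sampling.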

3.
Biomedical simulations are often dependent on numerical approximation methods, including finite element, finite difference, and finite volume methods, to model the varied phenomena of interest. An important requirement of the numerical approximation methods above is the need to create a discrete decomposition of the model geometry into a ‘mesh’. Historically, the generation of these meshes has been a critical bottleneck in efforts to efficiently generate biomedical simulations which can be utilized in understanding, planning, and diagnosing biomedical conditions. In this paper, we discuss a methodology for generating hexahedral meshes for biomedical models using an algorithm implemented in the SCIRun Problem Solving Environment. The method is flexible and can be utilized to build up conformal hexahedral meshes ranging from models defined by single isosurfaces to more complex geometries with multi-surface boundaries.
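The SCIRun algorithm itself is not reproduced here; purely as an illustration of what a hexahedral decomposition consists of, the hypothetical Python helper below builds the node coordinates and 8-node element connectivity of a structured hex grid, the trivial case that conformal, multi-surface meshing methods generalize.

    import numpy as np

    def structured_hex_mesh(nx, ny, nz, spacing=1.0):
        """Nodes and 8-node hexahedral connectivity for an nx-by-ny-by-nz cell grid."""
        xs, ys, zs = np.meshgrid(np.arange(nx + 1), np.arange(ny + 1), np.arange(nz + 1),
                                 indexing="ij")
        nodes = spacing * np.column_stack([xs.ravel(), ys.ravel(), zs.ravel()])

        def nid(i, j, k):                      # lattice index -> node id (C order)
            return (i * (ny + 1) + j) * (nz + 1) + k

        hexes = []
        for i in range(nx):
            for j in range(ny):
                for k in range(nz):
                    hexes.append([nid(i, j, k),           nid(i + 1, j, k),
                                  nid(i + 1, j + 1, k),   nid(i, j + 1, k),
                                  nid(i, j, k + 1),       nid(i + 1, j, k + 1),
                                  nid(i + 1, j + 1, k + 1), nid(i, j + 1, k + 1)])
        return nodes, np.array(hexes)

    nodes, hexes = structured_hex_mesh(2, 2, 2)
    print(nodes.shape, hexes.shape)            # (27, 3) (8, 8)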

4.
Maurice Berix, AI & Society, 2012, 27(1):165–172
Engaging the public in decision-making processes is commonly accepted as an effective strategy for better policy making, better policy support and for narrowing the gap between government and the public. In today’s digitised society, participation via online media is becoming more important. But is this so-called e-participation being used optimally? Or is a better design possible? In my opinion, the answer to these questions is a ‘yes’. Despite numerous efforts in engaging the public with policy deliberation, the actual number of participants remains low. In this article, I have used the YUTPA model (Nevejan 2009) to analyse some existing e-participation projects. Additionally, I derived ten characteristics of ‘play’ to make proposals for a more designerly e-participation approach.

5.
This study provides a step further in the computation of the transition path of a continuous time endogenous growth model discussed by Privileggi (Nonlinear dynamics in economics, finance and social sciences: essays in honour of John Barkley Rosser Jr., Springer, Berlin, Heidelberg, pp. 251–278, 2010)—based on the setting first introduced by Tsur and Zemel (J Econ Dyn Control 31:3459–3477, 2007)—in which knowledge evolves according to the Weitzman (Q J Econ 113:331–360, 1998) recombinant process. A projection method, based on the least squares of the residual function corresponding to the ODE defining the optimal policy of the ‘detrended’ model, allows for the numeric approximation of such policy for a positive Lebesgue measure range of values of the efficiency parameter characterizing the probability function of the recombinant process. Although the projection method’s performance rapidly degenerates as one departs from a benchmark value for the efficiency parameter, we are able to numerically compute time-path trajectories which are sufficiently regular to allow for sensitivity analysis under changes in parameter values.
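The model-specific ODE and policy function are not reproduced here; the Python sketch below only illustrates the generic least-squares projection idea on a toy problem y'(x) = -y(x), y(0) = 1, approximating the solution by a polynomial and minimizing the squared ODE residual over a grid. The helper names and the SciPy-based solver choice are assumptions, not the authors' code.

    import numpy as np
    from scipy.optimize import least_squares

    grid = np.linspace(0.0, 1.0, 50)                 # evaluation points for the residual

    def approx(c, x):
        # Polynomial approximation that enforces the initial condition y(0) = 1.
        return 1.0 + sum(ci * x ** (i + 1) for i, ci in enumerate(c))

    def dapprox(c, x):
        return sum((i + 1) * ci * x ** i for i, ci in enumerate(c))

    def residual(c):
        # Residual of the toy ODE y'(x) + y(x) = 0 on the grid.
        return dapprox(c, grid) + approx(c, grid)

    sol = least_squares(residual, x0=np.zeros(5))    # least squares of the residual function
    print(np.max(np.abs(approx(sol.x, grid) - np.exp(-grid))))   # small approximation error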

6.
We study the mathematical modeling and numerical simulation of the motion of red blood cells (RBC) and vesicles subject to an external incompressible flow in a microchannel. RBC and vesicles are viscoelastic bodies consisting of a deformable elastic membrane enclosing an incompressible fluid. We provide an extension of the finite element immersed boundary method by Boffi and Gastaldi (Comput Struct 81:491–501, 2003), Boffi et al. (Math Mod Meth Appl Sci 17:1479–1505, 2007), Boffi et al. (Comput Struct 85:775–783, 2007) based on a model for the membrane that additionally accounts for bending energy and also consider inflow/outflow conditions for the external fluid flow. The stability analysis requires both the approximation of the membrane by cubic splines (instead of linear splines without bending energy) and an upper bound on the inflow velocity. In the fully discrete case, the resulting CFL-type condition on the time step size is also more restrictive. We perform numerical simulations for various scenarios including the tank treading motion of vesicles in microchannels, the behavior of ‘healthy’ and ‘sick’ RBC, which differ in their stiffness, and the motion of RBC through thin capillaries. The simulation results are in very good agreement with experimentally available data.

7.
Learning models for detecting and classifying object categories is a challenging problem in machine vision. While discriminative approaches to learning and classification have, in principle, superior performance, generative approaches provide many useful features, one of which is the ability to naturally establish explicit correspondence between model components and scene features—this, in turn, allows for the handling of missing data and unsupervised learning in clutter. We explore a hybrid generative/discriminative approach, using ‘Fisher Kernels’ (Jaakkola, T., et al. in Advances in neural information processing systems, Vol. 11, pp. 487–493, 1999), which retains most of the desirable properties of generative methods, while increasing the classification performance through a discriminative setting. Our experiments, conducted on a number of popular benchmarks, show strong performance improvements over the corresponding generative approach. In addition, we demonstrate how this hybrid learning paradigm can be extended to address several outstanding challenges within computer vision, including how to combine multiple object models and learning with unlabeled data.
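In generic notation, the ‘Fisher Kernel’ construction referred to maps each example to the gradient of the generative model's log-likelihood (its Fisher score) and compares examples through the score inner product weighted by the inverse Fisher information,

    \[
    U_x = \nabla_\theta \log P(x \mid \theta), \qquad
    K(x_i, x_j) = U_{x_i}^{\top} I^{-1} U_{x_j}, \qquad
    I = \mathbb{E}_x\!\left[ U_x U_x^{\top} \right],
    \]

so that a discriminative classifier such as an SVM can be trained on features derived from the generative model.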

8.
Transaction-level modeling is used in hardware design for describing designs at a higher level than the register-transfer level (RTL) (e.g. Cai and Gajski in CODES+ISSS ’03: proceedings of the 1st IEEE/ACM/IFIP international conference on Hardware/software codesign and system synthesis, pp. 19–24, 2003; Chen et al. in FMCAD ’07: proceedings of the formal methods in computer aided design, pp. 53–61, 2007; Mahajan et al. in MEMOCODE ’07: proceedings of the 5th IEEE/ACM international conference on formal methods and models for codesign, pp. 123–132, 2007; Swan in DAC ’06: proceedings of the 43rd annual conference on design automation, pp. 90–92, 2006). Each transaction represents a unit of work, which is also a useful unit for design verification. In such models, there are many properties of interest which involve interactions between multiple transactions. Examples of this are ordering relationships in sequential processing and hazard checking in pipelined circuits. Writing such properties on the RTL design requires significant expertise in understanding the higher-level computation being done in a given RTL design and possible instrumentation of the RTL to express the property of interest. This is a barrier to the easy use of such properties in RTL designs.

9.
With the rise of ubiquitous computing in recent years, concepts of spatiality have become a significant topic of discussion in design and development of multimedia systems. This article investigates spatial practices at the intersection of youth, technology, and urban space in Seoul, and examines what the author calls ‘transyouth’: in the South Korean context, these people are between the ages of 18 and 24, situated on the delicate border between digital natives and immigrants in Prensky’s [46] terms. In the first section, the article sets out the technosocial environment of contemporary Seoul. This is followed by a discussion of social networking processes derived from semi-structured interviews conducted in 2007–2008 with Seoul transyouth about their ‘lived experiences of the city.’ Interviewees reported how they interact to play, work, and live with and within the city’s unique environment. The article develops a theme of how technosocial convergence (re)creates urban environments and argues for a need to consider such user-driven spatial recreation in designing cities as (ubiquitous) urban networks in recognition of its changing technosocial contours of connections. This is explored in three spaces of different scales: Cyworld as an online social networking space; cocoon housing—a form of individual residential space which is growing rapidly in many Korean cities—as a private living space; and ubiquitous City as the future macro-space of Seoul.

10.
The most cursory examination of the history of artificial intelligence highlights numerous egregious claims of its researchers, especially in relation to a populist form of ‘strong’ computationalism which holds that any suitably programmed computer instantiates genuine conscious mental states purely in virtue of carrying out a specific series of computations. The argument presented herein is a simple development of that originally presented in Putnam’s monograph “Representation & Reality” (Bradford Books, Cambridge, 1988), which, if correct, has important implications for Turing machine functionalism and the prospect of ‘conscious’ machines. In the paper, instead of seeking to develop Putnam’s claim that, “everything implements every finite state automata”, I will try to establish the weaker result that, “everything implements the specific machine Q on a particular input set (x)”. Then, equating Q (x) to any putative AI program, I will show that conceding the ‘strong AI’ thesis for Q (crediting it with mental states and consciousness) opens the door to a vicious form of panpsychism whereby all open systems (e.g. grass, rocks, etc.) must instantiate conscious experience and hence that disembodied minds lurk everywhere.

11.
In recent macro models with staggered price and wage settings, the presence of variables such as relative price and wage dispersion is prevalent, which is a source of bifurcations. In this paper, we illustrate how to detect the existence of a bifurcation in stylized macroeconomic models with Calvo (J Monet Econ 12(3):383–398, 1983) pricing. Following the general approach of Judd (Numerical methods in economics, 1998), we employ l’Hospital’s rule to characterize the first-order dynamics of relative price distortion in terms of its higher-order derivatives. We also show that, as is the usual practice in the literature, the bifurcation can be eliminated through renormalization of model variables. Furthermore, we demonstrate that the second-order approximate solutions under this renormalization and under bifurcations can differ significantly.
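The model-specific derivation is not reproduced here; the rule invoked is, in its basic form: if f(x) → 0 and g(x) → 0 as x → x0 and the limit on the right-hand side exists, then

    \[
    \lim_{x \to x_0} \frac{f(x)}{g(x)} \;=\; \lim_{x \to x_0} \frac{f'(x)}{g'(x)},
    \]

which is what allows an indeterminate 0/0 expression in the dynamics of relative price distortion to be characterized through higher-order derivatives.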

12.
Computing LTS Regression for Large Data Sets
Data mining aims to extract previously unknown patterns or substructures from large databases. In statistics, this is what methods of robust estimation and outlier detection were constructed for, see e.g. Rousseeuw and Leroy (1987). Here we will focus on least trimmed squares (LTS) regression, which is based on the subset of h cases (out of n) whose least squares fit possesses the smallest sum of squared residuals. The coverage h may be set between n/2 and n. The computation time of existing LTS algorithms grows too much with the size of the data set, precluding their use for data mining. In this paper we develop a new algorithm called FAST-LTS. The basic ideas are an inequality involving order statistics and sums of squared residuals, and techniques which we call ‘selective iteration’ and ‘nested extensions’. We also use an intercept adjustment technique to improve the precision. For small data sets FAST-LTS typically finds the exact LTS, whereas for larger data sets it gives more accurate results than existing algorithms for LTS and is faster by orders of magnitude. This allows us to apply FAST-LTS to large databases.
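As a hedged illustration (not the FAST-LTS implementation itself), the Python sketch below shows the LTS objective and the basic idea of repeatedly refitting on the h best-fitting cases; the variable names and toy data are assumptions.

    import numpy as np

    def lts_objective(X, y, beta, h):
        """Sum of the h smallest squared residuals for candidate coefficients beta."""
        r2 = (y - X @ beta) ** 2
        return np.sort(r2)[:h].sum()

    def concentration_step(X, y, beta, h):
        """Refit ordinary least squares on the h cases that currently fit best."""
        r2 = (y - X @ beta) ** 2
        best = np.argsort(r2)[:h]
        beta_new, *_ = np.linalg.lstsq(X[best], y[best], rcond=None)
        return beta_new

    rng = np.random.default_rng(0)
    n, h = 200, 150                          # the coverage h may be set between n/2 and n
    X = np.column_stack([np.ones(n), rng.normal(size=n)])
    y = 2.0 + 3.0 * X[:, 1] + rng.normal(scale=0.1, size=n)
    y[:30] += 20.0                           # gross outliers that ordinary least squares cannot resist

    beta = np.zeros(2)
    for _ in range(10):                      # iterate refitting until the subset stabilizes
        beta = concentration_step(X, y, beta, h)
    print(beta, lts_objective(X, y, beta, h))    # robust fit, largely unaffected by the outliers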

13.
The Canny Edge Detector Revisited
Canny (IEEE Trans. Pattern Anal. Mach. Intell. 8(6):679–698, 1986) suggested that an optimal edge detector should maximize both signal-to-noise ratio and localization, and he derived mathematical expressions for these criteria. Based on these criteria, he claimed that the optimal step edge detector was similar to a derivative of a Gaussian. However, Canny’s work suffers from two problems. First, his derivation of the localization criterion is incorrect. Here we provide a more accurate localization criterion and derive the optimal detector from it. Second, and more seriously, the Canny criteria yield an infinitely wide optimal edge detector. The width of the optimal detector can, however, be limited by considering the effect of the neighbouring edges in the image. If we do so, we find that the optimal step edge detector, according to the Canny criteria, is the derivative of an ISEF filter, proposed by Shen and Castan (Graph. Models Image Proc. 54:112–133, 1992).
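As a hedged, one-dimensional illustration of the kind of filter discussed (the derivative of a Gaussian; the ISEF derivative that the paper ultimately favours is not shown), the Python sketch below convolves a noisy step with a Gaussian-derivative kernel and locates the edge at the extremum of the response; the signal and parameters are made up.

    import numpy as np

    def gaussian_derivative_kernel(sigma, radius=None):
        """First derivative of a (normalized) Gaussian, sampled on an integer grid."""
        radius = int(3 * sigma) if radius is None else radius
        x = np.arange(-radius, radius + 1, dtype=float)
        g = np.exp(-x ** 2 / (2 * sigma ** 2))
        return -x / sigma ** 2 * g / g.sum()

    rng = np.random.default_rng(1)
    signal = np.concatenate([np.zeros(100), np.ones(100)])   # ideal step edge at index 100
    signal += rng.normal(scale=0.2, size=signal.size)        # additive noise

    response = np.convolve(signal, gaussian_derivative_kernel(sigma=3.0), mode="same")
    print("estimated edge position:", int(np.argmax(np.abs(response))))   # near index 100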

14.
POP: Patchwork of Parts Models for Object Recognition
We formulate a deformable template model for objects with an efficient mechanism for computation and parameter estimation. The data consists of binary oriented edge features, robust to photometric variation and small local deformations. The template is defined in terms of probability arrays for each edge type. A primary contribution of this paper is the definition of the instantiation of an object in terms of shifts of a moderate number of local submodels—parts—which are subsequently recombined using a patchwork operation, to define a coherent statistical model of the data. Object classes are modeled as mixtures of patchwork-of-parts (POP) models that are discovered sequentially as more class data is observed. We define the notion of the support associated to an instantiation, and use this to formulate statistical models for multi-object configurations including possible occlusions. All decisions on the labeling of the objects in the image are based on comparing likelihoods. The combination of a deformable model with an efficient estimation procedure yields competitive results in a variety of applications with very small training sets, without the need to train decision boundaries—only data from the class being trained is used. Experiments are presented on the MNIST database, reading zipcodes, and face detection.

15.
The notion of P-simple points was introduced by Bertrand to conceive parallel thinning algorithms. In ‘A 3D fully parallel thinning algorithm for generating medial faces’ (Pattern Recogn. Lett. 16:83–87, 1995), Ma proposed an algorithm for which there are objects whose topology is not preserved. In this paper, we propose a new application of P-simple points: to automatically correct Ma’s algorithm.

16.
We study properties of non-uniform reductions and related completeness notions. We strengthen several results of Hitchcock and Pavan (ICALP (1), Lecture Notes in Computer Science, vol. 4051, pp. 465–476, Springer, 2006) and give a trade-off between the amount of advice needed for a reduction and its honesty on NEXP. We construct an oracle relative to which this trade-off is optimal. We show, in a more systematic study of non-uniform reductions, among other things that non-uniformity can be removed at the cost of more queries. In line with Post’s program for complexity theory (Buhrman and Torenvliet in Bulletin of the EATCS 85, pp. 41–51, 2005) we connect such ‘uniformization’ properties to the separation of complexity classes.

17.
Managing dynamic environments often requires decision making under uncertainty and risk. Two types of uncertainty are involved: uncertainty about the state and the evolution of the situation, and the ‘openness’ of the possible actions for facing possible consequences. In an experimental study on risk management in dynamic situations, two contrasting ‘ecological’ scenarios – transposed from real situations of emergency management – were compared in order to identify the impact of their ‘openness’ on the subjects’ strategies for decision making. The ‘Lost Child’ scenario presented qualitative and irreversible consequences (child’s death) and high uncertainty; it placed high demands both on risk assessment (risk representation) and on action elaboration and choice. A less open situation (‘Hydrocarbon Fire’) required a main choice between two contrasting actions, with quantitative, computable consequences. The strategies of ‘experimental subjects’ (university students) and ‘operative subjects’ (professional fire-fighter officers) were compared in order to evaluate the ecological validity of experimental research in this field, from the point of view of the subjects themselves. The two scenarios appeared to be independent, so that quite different models of decision making have to be hypothesised, differing in the importance of assessing risk and defining possible actions on the one hand, and in the process of choice on the other. ‘Experimental’ subjects differed dramatically from ‘operative’ subjects when confronted with the same scenario, particularly for the less technical but more demanding scenario. It is hypothesised that three components might account for the effect of the situations and for the differences between and within groups of subjects: importance of situation assessment, spatial abilities, and global orientation of activity in managing dynamic risk.

18.
This paper studies the use of hypothetical and value-based reasoning in US Supreme Court cases concerning the United States Fourth Amendment. Drawing upon formal AI & Law models of legal argument, a semi-formal reconstruction is given of parts of the Carney case, which has been studied previously in AI & Law research on case-based reasoning. As part of the reconstruction, a semi-formal proposal is made for extending the formal AI & Law models with forms of metalevel reasoning in several argument schemes. The result is compared with Rissland’s (1989) analysis in terms of dimensions and Ashley’s (2008) analysis in terms of his process model of legal argument with hypotheticals.

19.
LES of reacting flows is rapidly becoming mature and providing levels of precision which cannot be reached with any RANS (Reynolds Averaged) technique. In addition to the multiple subgrid scale models required for such LES and to the questions raised by the required numerical accuracy of LES solvers, various issues related to the reliability, mesh independence and repeatability of LES must still be addressed, especially when LES is used on massively parallel machines. This talk discusses some of these issues: (1) the existence of non-physical waves (known as ‘wiggles’ by most LES practitioners) in LES, (2) the effects of mesh size on LES of reacting flows, (3) the growth of rounding errors in LES on massively parallel machines and, more generally, (4) the ability to qualify an LES code as ‘bug free’ and ‘accurate’. Examples range from academic cases (a minimum non-reacting turbulent channel) to applied configurations (a sector of a helicopter combustion chamber).

20.
The development of autonomous mobile machines to perform useful tasks in real work environments is currently being impeded by concerns over effectiveness, commercial viability and, above all, safety. This paper introduces a case study of a robotic excavator to explore a series of issues around system development, navigation in unstructured environments, autonomous decision making and changing the behaviour of autonomous machines to suit the prevailing demands of users. The adoption of the Real-Time Control Systems (RCS) architecture (Albus, 1991) is proposed as a universal framework for the development of intelligent systems. In addition, it is explained how the use of Partially Observable Markov Decision Processes (POMDP) (Kaelbling et al., 1998) can form the basis of decision making in the face of uncertainty and how the technique can be effectively incorporated into the RCS architecture. Particular emphasis is placed on ensuring that the resulting behaviour is both task effective and adequately safe, and it is recognised that these two objectives may be in opposition and that the desired relative balance between them may change. The concept of an autonomous system having “values” is introduced through the use of utility theory. Results from limited simulation experiments are reported which demonstrate that these techniques can create intelligent systems capable of modifying their behaviour to exhibit either ‘safety-conscious’ or ‘task-achieving’ personalities.
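The excavator models themselves are not reproduced here; as a minimal illustration of the POMDP machinery referred to, the Python sketch below implements the standard belief update b'(s') ∝ O(o | s', a) Σ_s T(s' | s, a) b(s) over a small hypothetical state space (all states, actions and probabilities are made up).

    import numpy as np

    # Hypothetical two-state problem: is the ground ahead of the excavator clear or obstructed?
    states = ["clear", "obstructed"]
    observations = {"no_echo": 0, "echo": 1}

    # T[a][s, s']: state transition probabilities for action a.
    T = {"advance": np.array([[0.9, 0.1],
                              [0.2, 0.8]])}
    # O[a][s', o]: observation probabilities in the new state s' after action a.
    O = {"advance": np.array([[0.85, 0.15],
                              [0.10, 0.90]])}

    def belief_update(belief, action, obs):
        """b'(s') proportional to O(o | s', a) * sum_s T(s' | s, a) * b(s)."""
        predicted = belief @ T[action]                     # sum over previous states
        updated = predicted * O[action][:, observations[obs]]
        return updated / updated.sum()                     # renormalize

    b = np.array([0.5, 0.5])                               # uniform prior belief
    b = belief_update(b, "advance", "echo")
    print(dict(zip(states, np.round(b, 3))))               # belief shifts toward "obstructed"

In the terms of the abstract, the system's ‘values’ (utilities over such beliefs) then determine whether its behaviour comes out ‘safety-conscious’ or ‘task-achieving’.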
