Related Articles
20 related articles retrieved (search took 31 ms)
1.
Context: Companies increasingly strive to adapt to market and ecosystem changes in real time. Gauging and understanding team performance in such changing environments present a major challenge.
Objective: This paper aims to understand how software developers experience the continuous adaptation of performance in a modern, highly volatile environment using Lean and Agile software development methodology. This understanding can be used as a basis for guiding formation and maintenance of high-performing teams, to inform performance improvement initiatives, and to improve working conditions for software developers.
Method: A qualitative multiple-case study using thematic interviews was conducted with 16 experienced practitioners in five organisations.
Results: We generated a grounded theory, Performance Alignment Work, showing how software developers experience performance. We found 33 major categories of performance factors and relationships between the factors. A cross-case comparison revealed similarities and differences between different kinds and different sizes of organisations.
Conclusions: Based on our study, software teams are engaged in a constant cycle of interpreting their own performance and negotiating its alignment with other stakeholders. While differences across organisational sizes exist, a common set of performance experiences is present despite differences in context variables. Enhancing performance experiences requires integration of soft factors, such as communication, team spirit, team identity, and values, into the overall development process. Our findings suggest a view of software development and software team performance that centres around behavioural and social sciences.

2.
Formal notations like B or action systems support a notion of refinement. Refinement relates an abstract specification A to a concrete specification C that is at least as deterministic. Knowing A and C, one proves that C refines, or implements, specification A. In this study we consider specification A as given and concern ourselves with finding a good candidate for implementation C. To this end we classify all implementations of an abstract specification according to their performance. We distinguish performance from correctness. Concrete systems that do not meet the abstract specification correctly are excluded. Only the remaining correct implementations C are considered with respect to their performance. A good implementation of a specification is identified by having some optimal behaviour in common with it. In other words, a good refinement corresponds to a reduction of non-optimal behaviour. This also means that the abstract specification sets a boundary for the performance of any implementation. We introduce the probabilistic action system formalism, which combines refinement with performance. In the current study we measure performance in terms of long-run expected average cost. Performance is expressed by means of probability and expected costs. Probability is needed to express uncertainty present in physical environments. Expected costs express physical or abstract quantities that describe a system; they encode the performance objective. The behaviour of probabilistic action systems is described by traces of expected costs. A corresponding notion of refinement and simulation-based proof rules are introduced. Probabilistic action systems are based on discrete-time Markov decision processes. Numerical methods for solving the optimisation problems posed by Markov decision processes are well known, and are used in a software tool that we have developed. The tool computes an optimal behaviour of a specification A, thus assisting in the search for a good implementation C.
Received September 2002; accepted in revised form January 2004 by E.C.R. Hehner
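Since probabilistic action systems are based on discrete-time Markov decision processes, the long-run expected average cost can be computed with standard dynamic programming. Below is a minimal sketch of one such standard method, relative value iteration for a finite unichain MDP given as arrays; it illustrates the kind of computation such a tool performs, not the authors' implementation:

```python
import numpy as np

def relative_value_iteration(P, c, tol=1e-8, max_iter=10_000):
    """Relative value iteration for the long-run average-cost criterion.

    P: (A, S, S) array, P[a, s, t] = transition probability s -> t under action a.
    c: (A, S) array, expected one-step cost of action a in state s.
    Assumes a finite, unichain (and aperiodic) MDP; returns the optimal
    average cost g and a greedy cost-minimising stationary policy.
    """
    h = np.zeros(P.shape[1])                  # relative value function
    for _ in range(max_iter):
        # Bellman backup: one-step cost plus expected future relative value
        Q = c + np.einsum('ast,t->as', P, h)  # shape (A, S)
        h_new = Q.min(axis=0)
        g = h_new[0]                          # normalise at a reference state
        h_new = h_new - g
        if np.max(np.abs(h_new - h)) < tol:
            break
        h = h_new
    return g, Q.argmin(axis=0)
```

On convergence, g approximates the optimal expected cost per step, and the returned policy attains it.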

3.
Introduction: The main purpose of this cross-sectional study was to investigate whether visual discomfort acts as a mediating factor between perceived visual ergonomic working conditions and self-rated visual performance among office workers who carry out administrative tasks and computer-based work at the Swedish Tax Agency.
Methods: A questionnaire was sent to 94 office workers addressing: 1) perceived visual quality of the visual display units; 2) prevalence of eye symptoms; and 3) self-rated visual performance. Eighty-six persons (54 women (63%), 31 men (36%), and 1 of unspecified sex) answered the questionnaire. Multiple regression analysis investigated the association between visual ergonomic working conditions and visual performance, both with and without visual discomfort as a mediator.
Results: The group mean of the indexed survey questions indicated a reasonably good quality of visual ergonomic working conditions, a relative absence of eye symptoms, and acceptable self-rated visual performance. Results from multiple regression analysis showed a significant association between perceived visual ergonomic working conditions and self-rated visual performance (r² = 0.30, β = 0.327, p < 0.01). When visual discomfort was used as a mediator, the association between perceived visual ergonomic working conditions and self-rated visual performance remained the same (r² = 0.32, β = 0.315, p < 0.01).
Discussion: It was remarkable to discover that self-rated visual performance was independent of visual discomfort. Possible explanations include exposure factors not included in the current study, such as dry air and sensory irritation in the eyes, psychosocial stress, time spent performing near-work activities, or time exposed to visually deficient working conditions.
Relevance to industry: The strong connection between satisfaction with visual ergonomic working conditions and productivity in this study has implications for workplace profitability and staff satisfaction. If productivity is enhanced by better visual ergonomic working conditions, then managers of workplaces may be able to improve work outcomes by optimizing the physical work environment.
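The mediation test here reduces to comparing the predictor's coefficient with and without the mediator in the regression. A sketch with synthetic data (variable names and effect sizes are illustrative stand-ins, not the survey data):

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 86                                     # sample size as in the study
ergo = rng.normal(size=n)                  # perceived visual ergonomic conditions
disc = -0.4 * ergo + rng.normal(size=n)    # visual discomfort (candidate mediator)
perf = 0.33 * ergo + rng.normal(scale=0.9, size=n)  # self-rated visual performance

# Model 1: performance regressed on working conditions alone (total effect)
m1 = sm.OLS(perf, sm.add_constant(ergo)).fit()

# Model 2: mediator added; if the coefficient on `ergo` barely moves
# (as in the paper: beta 0.327 -> 0.315), mediation is negligible
m2 = sm.OLS(perf, sm.add_constant(np.column_stack([ergo, disc]))).fit()

print(m1.params[1], m2.params[1])          # compare the two betas for `ergo`
```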

4.
《Ergonomics》2012,55(11):1392-1399
Abstract

The aims of the study were to investigate the effects of racing video game experience on the gaze behaviour and performance of drivers, and the effects of natural driving experience on the gaze behaviour and performance of gamers. Thirty participants, divided into drivers-gamers, drivers-non-gamers and non-drivers-gamers, were asked to drive a race circuit as fast as possible while their eye movements were recorded. Drivers-gamers spent more time looking at the lane than non-drivers-gamers. Furthermore, drivers-gamers made a greater number of fixations towards the speedometer and completed the racing task faster than drivers-non-gamers. Combining natural driving and race gaming experience changed the gaze location strategy of drivers.

Practitioner summary: Players of racing video games have a high propensity to exhibit attitudes and intentions of risky driving behaviour. Combining natural driving and race gaming experience affects the gaze behaviour strategy of drivers.

Abbreviations: DG: Drivers-gamers; DNG: Drivers-non-gamers; NDG: Non-drivers-gamers; AOIs: Areas of Interest; r-NUMFIX: Relative number of fixations; r-DURFIX: Relative fixation duration

5.
This research concerns a gradient descent training algorithm for a min-max network, which we refer to as the target network. Training makes use of a helper feed-forward network (FFN) to represent the performance function used in training the target network. A helper FFN is needed because the performance function of the target network is not differentiable in its trainable parameter vector, p. Values of p are generated randomly and their performance values are determined, producing the data for training the helper FFN with its own connection matrices. We thus approximate the mathematical relationship between performance values and p by training an FFN: the input to this FFN is a value of p and the output is a performance measure. The transfer function of the helper FFN provides a differentiable stand-in for the performance function of the parameter vector p, allowing gradient search methods to find the optimum p for the target network. The method is successfully tried in approximating a given function and also on training data produced by a randomly selected min-max network.
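A minimal PyTorch sketch of this surrogate-gradient idea, under the assumption that performance is an arbitrary non-differentiable black box in p (the `true_performance` stand-in and all sizes are hypothetical, not the paper's min-max network):

```python
import torch
import torch.nn as nn

def true_performance(p):                     # non-differentiable black box
    return float((p.round() - 0.5).abs().sum())

dim_p = 8
helper = nn.Sequential(nn.Linear(dim_p, 32), nn.Tanh(), nn.Linear(32, 1))
opt_h = torch.optim.Adam(helper.parameters(), lr=1e-2)

# 1) Fit the helper FFN on (p, performance) pairs sampled at random
for _ in range(1000):
    p_batch = torch.rand(64, dim_p)
    y = torch.tensor([[true_performance(row)] for row in p_batch])
    loss = nn.functional.mse_loss(helper(p_batch), y)
    opt_h.zero_grad()
    loss.backward()
    opt_h.step()

# 2) Gradient-descend on p through the now-differentiable surrogate
p = torch.rand(dim_p, requires_grad=True)
opt_p = torch.optim.Adam([p], lr=1e-2)
for _ in range(300):
    surrogate_perf = helper(p.unsqueeze(0)).squeeze()
    opt_p.zero_grad()
    surrogate_perf.backward()                # minimise the surrogate cost
    opt_p.step()
```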

6.
Context: Organizational performance measurement in software product development has received a lot of attention in the literature. Still, there is general discontent regarding the way performance is evaluated in practice, with few studies really focusing on why this is the case. This paper reports on research conducted in the context of developing software-intensive products in large, established multi-national organizations.
Objective: The purpose of this research is to investigate performance measurement practices related to software product development activities. More specifically, the focus is on exploring how managers engaged in software product development activities perceive and evaluate performance in large organizations from a managerial perspective.
Method: The research approach consists of exploratory multiple case studies. Data was collected mainly through 54 interviews in five case studies at large international organizations developing software-intensive products in Sweden. Focus group interviews with senior managers from eight companies were also used in the data collection.
Results: The results indicate that managers within software product development are generally dissatisfied with their current way of evaluating performance. Performance measurements and the perception of performance are today focused on cost, time, and quality, i.e. what is easily measurable and not necessarily what is important. The dimensions of value creation and learning are missing. Moreover, measurements tend to be result-oriented rather than process-oriented, making it difficult to integrate them into management practices.
Conclusion: Managers who are dissatisfied with their performance measurement system and want to improve the current situation should not start by focusing directly on the current measurements; instead, they should focus on how the organization perceives performance and on how important performance criteria are developed. Developing relevant performance criteria is the first step in developing an effective performance measurement system. Moreover, managers' perception of performance is affected by the measurements currently in use, which limits the scope of the performance criteria. Thus, a change in the way managers perceive performance is necessary before there can be any change in the way performance is evaluated.

7.
Context: To determine the effectiveness of software testers, a suitable performance appraisal approach is necessary, for both research and practice purposes. However, a review of the relevant literature reveals little information on how software testers are appraised in practice.
Objective: (i) To enhance our knowledge of industry practice in the performance appraisal of software testers, and (ii) to collect feedback from project managers on a proposed performance appraisal form for software testers.
Method: A web-based questionnaire survey was used to collect responses. Participants were recruited using cluster and snowball sampling; 18 software development project managers participated.
Results: We found two broad trends in the performance appraisal of software testers: the same appraisal process for all employees, or a specialized performance appraisal method for software testers. Detailed opinions were collected and analyzed on how the performance of software testers should be appraised. Our proposed appraisal approach was generally well received.
Conclusion: Factors such as the number of bugs found after delivery and the efficiency of executing test cases were considered important in appraising software testers' performance. Our proposed approach was refined based on the feedback received.

8.
Context: Software defect prediction has been widely studied using various machine-learning algorithms. Previous studies usually focus on within-company defect prediction (WCDP), but the lack of training data in the early stages of software testing limits the efficiency of WCDP in practice. Recent research has therefore largely examined cross-company defect prediction (CCDP) as an alternative solution.
Objective: However, the gap between the distributions of cross-company (CC) data and within-company (WC) data usually makes it difficult to build a high-quality CCDP model. In this paper, a novel algorithm named Double Transfer Boosting (DTB) is introduced to narrow this gap and improve the performance of CCDP by reducing negative samples in CC data.
Method: The proposed DTB model integrates two levels of data transfer: first, the data gravitation method reshapes the whole distribution of CC data to fit the WC data; second, the transfer boosting method employs a small ratio of labeled WC data to eliminate negative instances in the CC data.
Results: The empirical evaluation was conducted on 15 publicly available datasets. CCDP experiment results indicated that the proposed model achieved better overall performance than the compared CCDP models. DTB was also compared to WCDP in two different situations. Statistical analysis suggested that DTB performed significantly better than WCDP models trained on limited samples and produced results comparable to WCDP with sufficient training data.
Conclusions: DTB reshapes the distribution of CC data at different levels to improve the performance of CCDP, and the experimental results and analysis demonstrate that it can be an effective model for early software defect detection.
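The transfer-boosting stage can be sketched in the spirit of TrAdaBoost-style re-weighting: cross-company instances that keep being misclassified fade out, while the small labelled within-company set is re-weighted as in ordinary AdaBoost. This illustrates the general mechanism only; DTB's data-gravitation stage and exact update rules are not reproduced here:

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

def transfer_boost(X_cc, y_cc, X_wc, y_wc, n_rounds=10):
    n_cc = len(X_cc)
    X = np.vstack([X_cc, X_wc])
    y = np.concatenate([y_cc, y_wc])
    w = np.ones(len(y)) / len(y)
    beta_cc = 1.0 / (1.0 + np.sqrt(2 * np.log(n_cc) / n_rounds))
    learners, betas = [], []
    for _ in range(n_rounds):
        h = DecisionTreeClassifier(max_depth=1).fit(X, y, sample_weight=w)
        wrong = h.predict(X) != y
        # error is measured on the within-company (target) part only
        eps = w[n_cc:][wrong[n_cc:]].sum() / w[n_cc:].sum()
        eps = min(max(eps, 1e-10), 0.499)
        beta = eps / (1 - eps)
        # CC samples: shrink weight when wrong; WC samples: grow weight when wrong
        w[:n_cc] *= np.where(wrong[:n_cc], beta_cc, 1.0)
        w[n_cc:] *= np.where(wrong[n_cc:], 1.0 / beta, 1.0)
        w /= w.sum()
        learners.append(h)
        betas.append(beta)
    return learners, betas
```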

9.
10.
Purpose: The purpose of this paper is to investigate the impact of Supply Chain Information Integration (SCII) on the operational performance of manufacturing firms in Malaysia, considering the role of information leakage.
Design/methodology/approach: To test the model developed, we conducted an online questionnaire survey with Malaysian manufacturing companies drawn from the Federation of Malaysian Manufacturers directory of 2018. Of the 400 questionnaires sent out, 144 usable responses were received, giving a response rate of 36%. The data were analyzed using SmartPLS, a second-generation statistical tool.
Findings: The findings of this study showed that information quality, information security, and information technology (IT) had a positive effect on SCII with an explanatory power of 47.2%, while SCII, in turn, had a positive effect on operational performance, explaining 17% of the variance. Intentional information leakage (IIL) moderated the relationship between SCII and operational performance, whereas accidental information leakage did not.
Practical implications: This study provides insights into the difficulties faced when implementing SCII, particularly by medium and large manufacturing companies in Malaysia. It helps identify appropriate strategies that can guide management in its effort to improve performance through SCII.
Originality/value: This research is arguably the first study to simultaneously investigate the effect of information quality, IT, and information security on SCII and the moderating effect of information leakage on the relationship between SCII and operational performance. The results indicate that information security has the largest impact on SCII, followed by IT and information quality. Furthermore, IIL, as a negative aspect of information integration, may weaken the relationship between SCII and operational performance.
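Statistically, the moderation finding corresponds to a significant interaction term. A simulated sketch of such a check with ordinary least squares (hypothetical variable names and synthetic data; the paper itself used PLS structural equation modelling in SmartPLS):

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 144
scii = rng.normal(size=n)                  # supply chain information integration
leak = rng.normal(size=n)                  # intentional information leakage
perf = 0.4 * scii - 0.2 * scii * leak + rng.normal(size=n)

# Regress performance on SCII, the moderator, and their product term;
# a significant coefficient on the product term indicates moderation.
X = sm.add_constant(np.column_stack([scii, leak, scii * leak]))
fit = sm.OLS(perf, X).fit()
print(fit.params)
print(fit.pvalues)
```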

11.
Abstract

Decision makers often make poor use of the information provided by an automated signal detection aid; recent studies have found that participants assisted by an automated aid fell well short of best-possible sensitivity levels. The present study tested the generalisability of this finding over varying levels of aid reliability. Participants performed a binary signal detection task either unaided or with assistance from a decision aid that was 60%, 85%, or 96%-reliable. Assistance from a highly reliable aid (85% or 96%) improved discrimination performance, while assistance from a low-reliability aid (60%) did not. Because their ideal strategy is to place less weight on less reliable cues, however, the decision makers’ tendency to disuse the aid became more appropriate as the aid’s reliability declined. Automation-aided efficiency was thus near to optimal when the aid was close to chance but became highly inefficient, ironically, as the aid’s reliability increased.
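The "best-possible" benchmark in this literature is usually the ideal-observer combination of the human's and the aid's evidence. Under the common assumption of independent, equal-variance Gaussian evidence (an assumption of this note, not stated in the abstract), optimal weighting gives

$$
d'_{\text{opt}} = \sqrt{(d'_h)^2 + (d'_a)^2},
\qquad
\eta = \left( \frac{d'_{\text{obs}}}{d'_{\text{opt}}} \right)^{2},
$$

where $\eta$ is the observed efficiency. A low-reliability aid contributes a small $d'_a$ and barely raises the benchmark, so a near-chance aid leaves little efficiency to lose; a 96%-reliable aid raises $d'_{\text{opt}}$ sharply, so the same human shortfall shows up as a much larger inefficiency.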

Practitioner Summary: Investigating operators' automation-aided information integration strategies allows human factors practitioners to predict the level of performance the operator will attain. Ironically, in an aided signal detection task, performance when assisted by a highly reliable aid is far less efficient than performance when assisted by a much less reliable aid.

Abbreviations: OW: optimal weighting; UW: uniform weighting; CC: contingent criterion; BD: best decides; CF: coin flip; PM: probability matching; HDI: highest density interval; MCMC: Markov chain Monte Carlo; HR: hit rate; FAR: false alarm rate

12.
Context: Fault handling is a very important aspect of business process functioning. However, fault handling has thus far been addressed statically, requiring fault handlers and handling logic to be defined at design time, which takes a great deal of effort, is error-prone, and is relatively difficult to maintain and extend. It is sometimes even impossible to define all fault handlers at design time.
Objective: To address this issue, we describe a novel context-aware architecture for fault handling in executable business processes, which enables dynamic fault handling during business process execution.
Method: We analysed the disadvantages of existing fault handling in WS-BPEL. We designed an artifact that complements existing statically defined fault handling so that faults can be handled dynamically during business process run-time. We evaluated the artifact with an analysis of system performance and a comparison against a set of well-known workflow exception handling patterns.
Results: The designed artifact comprises an Observer component, an Exception Handler Bus, an Exception Knowledge Base, and a Solution Repository. A system performance analysis shows significantly decreased repair time with the use of context-aware activities. We proved that the designed artifact extends the range of supported workflow exception handling patterns.
Conclusion: The artifact presented in this research considerably improves on static fault handling, as it enables dynamic resolution of semantically similar faults with continuous enhancement of fault handling at run-time. It also results in broader support for workflow exception handling patterns.
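A toy sketch of the dynamic-resolution idea: faults are looked up in a knowledge base at run time, with a fallback to the most similar known fault. Plain string similarity stands in for the paper's semantic matching, and all names are illustrative, not the WS-BPEL implementation:

```python
from difflib import SequenceMatcher

class ExceptionKnowledgeBase:
    def __init__(self):
        self._solutions = {}               # fault name -> handler callable

    def register(self, fault, handler):
        self._solutions[fault] = handler

    def resolve(self, fault):
        if fault in self._solutions:       # exact match known at run time
            return self._solutions[fault]
        # fall back to the most similar known fault
        best = max(self._solutions,
                   key=lambda k: SequenceMatcher(None, k, fault).ratio())
        handler = self._solutions[best]
        self.register(fault, handler)      # continuous enhancement at run time
        return handler

kb = ExceptionKnowledgeBase()
kb.register("InvoiceServiceTimeout", lambda: "retry with backup endpoint")
print(kb.resolve("PaymentServiceTimeout")())   # resolved by similarity
```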

13.
《Ergonomics》2012,55(11):1485-1488
Abstract

Vigilance is the ability of an observer to maintain attention for extended periods of time; however, performance tends to decline with time on watch, a pattern referred to as the vigilance decrement. Previous research has focused on factors that attenuate the decrement, but one factor rarely studied is the effect of social facilitation. The purpose of the present investigation was to determine how different types of social presence affect the performance, workload and stress of vigilance. It was hypothesised that the presence of a supervisory figure would increase overall performance, but might do so at the cost of increased workload and stress. Results indicated that the percentage of false alarms and response times decreased in the presence of a supervisory figure. Using social facilitation in vigilance tasks may thus have positive as well as negative effects, depending on the dependent measure of interest and the role of the observer.

Practitioner Summary: Social facilitation has rarely been examined in the context of vigilance, even though it may improve performance. Vigilance task performance was examined under social presence. The results of the present study indicated that false alarms and response times decreased in the social presence of a supervisory figure, thus improving performance.

14.
Purpose: The performance of discrete-item manufacturing systems (MS) is a primary concern of industrial firms. However, understanding the interrelations between performance and its key factors requires further advancement, and several questions remain unanswered in the Operations and Production Management (OPM) field about how to understand and manage the relationships between these key factors. To address these challenges, this paper conceptualizes and examines the relevant antecedents and essential elements for the design and optimization of competitive MS.
Design/methodology/approach: First, drawing on the consolidated OPM literature, a novel conceptual model was developed incorporating the conceptual relationships essential to MS performance. Second, we conducted a systematic literature review based on the PRISMA protocol to analyze and validate the proposed conceptual model and to indicate the field's current knowledge gaps and future research directions.
Findings: The findings validated the proposed conceptual model by establishing the complex causal interrelations among key factors that influence discrete MS performance. Moreover, we found that the operational performance of discrete MS is multidimensional and directly dependent on the variables and mechanisms associated with the production flow. The findings also demonstrated that the antecedents of MS performance vary in importance and are temporally interrelated. Lastly, the paper advances the understanding of MS by revealing the predominance of quantitative approaches (e.g., discrete-event simulation and closed mathematical models) in the literature, as well as an emphasis on describing these approaches rather than characterizing MS appropriately.
Research and practical implications: This paper extends our knowledge of the operational performance challenges in discrete MS by proposing a visual, direct, and intuitive conceptual model that enables firms to better comprehend these complex challenges. This research also answers ongoing calls for investigation of the antecedents and elements of competitive MS design and optimization. Our findings show that decision-making in discrete MS is established temporally, based on strategic, operational, and control definitions, influencing firms' operational performance. Finally, since it draws on seminal OPM literature specializing in MS, this study informs scholars and industrial managers and aids decision-making about discrete MS.
Originality/value: The first original aspect of this study lies in bridging the gaps identified in the OPM literature by providing a robust conceptual framework that highlights the key factors of operational performance in discrete MS. The second is that it adopts different information sources in an independent and complementary way to achieve greater generalizability and robustness of the contributions.

15.
General Packet Radio Service (GPRS) supports data transfers up to the occasional transmission of large amounts of user data. Signaling procedures are specified for the provision of connection-oriented services and the establishment of data channels between mobile subscribers. The time overhead introduced by end-to-end signaling operations is a crucial performance factor that determines the provided Quality of Service (QoS). A significant time overhead associated with the high-rate establishment and release of many short-lived data channels, as required during hand-over or for Internet access, would result in network performance degradation.

Nevertheless, despite its significance, there is a lack of papers in the Internet community that either investigate GPRS signaling performance or present GPRS trials and measurements. This lack becomes even more apparent, and more important, given that the deployment of GPRS, at least in its first phase, revealed significant delays in connection setup times.

In this respect, this paper presents experiments conducted on a GPRS testing platform, focusing on the performance assessment of signaling functionality. The trials focus on the performance evaluation of GPRS signaling operations related to the establishment and release of user data channels through the Gn and Gi interfaces. Timing results that quantify the overall delay under diverse conditions of signaling load, rate and subscribers are presented.

16.

Generative adversarial network (GAN) models have been successfully utilized in a wide range of machine learning applications, and the tabular data generation domain is no exception. Notably, some state-of-the-art tabular data generation models, such as CTGAN, TableGAN, and MedGAN, are based on GANs. Even though these models achieve superior performance in generating artificial data when trained on a range of datasets, there is still considerable room (and desire) for improvement, and existing methods have weaknesses beyond raw performance. First, current methods focus only on the performance of the model, with limited emphasis on its interpretation. Second, current models operate on raw features only, and hence fail to exploit any prior knowledge of explicit feature interactions that could be utilized during the data generation process. To alleviate these two limitations, in this work we propose a novel tabular data generation model, GANBLR (Generative Adversarial Network modelling inspired by the relationship between Naive Bayes and Logistic Regression), which not only addresses the interpretation limitation of existing tabular GAN-based models but also provides the capability to handle explicit feature interactions. Through extensive evaluations on a wide range of datasets, we demonstrate GANBLR's superior performance as well as its better interpretability (explanation of feature importance in the synthetic generation process) compared to existing state-of-the-art tabular data generation models.
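For orientation, the generic adversarial training loop that tabular GAN models build on looks roughly as follows. This PyTorch sketch shows the common backbone only; it omits the Naive Bayes/Logistic Regression structure that distinguishes GANBLR, and assumes rows are already encoded as a float tensor:

```python
import torch
import torch.nn as nn

n_cols, latent_dim = 10, 32
G = nn.Sequential(nn.Linear(latent_dim, 64), nn.ReLU(), nn.Linear(64, n_cols))
D = nn.Sequential(nn.Linear(n_cols, 64), nn.LeakyReLU(0.2), nn.Linear(64, 1))
opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
bce = nn.BCEWithLogitsLoss()

real_data = torch.randn(1000, n_cols)            # stand-in for encoded rows

for step in range(1000):
    real = real_data[torch.randint(0, 1000, (64,))]
    fake = G(torch.randn(64, latent_dim))
    # discriminator: separate real rows from generated ones
    d_loss = (bce(D(real), torch.ones(64, 1))
              + bce(D(fake.detach()), torch.zeros(64, 1)))
    opt_d.zero_grad()
    d_loss.backward()
    opt_d.step()
    # generator: produce rows the discriminator accepts as real
    g_loss = bce(D(fake), torch.ones(64, 1))
    opt_g.zero_grad()
    g_loss.backward()
    opt_g.step()
```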


17.
Context: Domain-Specific Visual Languages (DSVLs) play a crucial role in Model-Driven Engineering (MDE). Most DSVLs already allow the specification of the structure and behavior of systems. However, there is also an increasing need to model, simulate and reason about their non-functional properties. In particular, QoS usage and management constraints (performance, reliability, etc.) are essential characteristics of any non-trivial system.
Objective: Very few DSVLs currently offer support for modeling these kinds of properties, and those that do tend to require skilled knowledge of specialized notations, which clashes with the intuitive nature of DSVLs. In this paper we present an alternative approach to specifying QoS properties in a high-level and platform-independent manner.
Method: We propose the use of special objects (observers) that can be added to the graphical specification of a system to describe and monitor some of its non-functional properties.
Results: Observers allow the global state of the system to be extended with the variables the designer wants to analyze, capturing the performance properties of interest. A performance evaluation tool has also been developed as a proof of concept for the proposal.
Conclusion: The results show how non-functional properties can be specified in DSVLs using observers, and how the performance of systems specified in this way can be evaluated in a flexible and effective way.
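The observer idea can be pictured as a monitoring object that extends the system state with the variables under analysis without touching the system's own logic. A toy sketch (all names hypothetical; the paper works at the DSVL/model level, not in code):

```python
import random
import statistics

class ResponseTimeObserver:
    """Records response times of a modelled system without altering it."""
    def __init__(self):
        self.samples = []

    def record(self, t):
        self.samples.append(t)

    def latency_report(self):
        ordered = sorted(self.samples)
        return {"mean": statistics.mean(ordered),
                "p95": ordered[int(0.95 * len(ordered))]}

obs = ResponseTimeObserver()
for _ in range(1000):                        # simulated service calls
    obs.record(random.expovariate(1 / 0.2))  # ~200 ms mean service time
print(obs.latency_report())
```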

18.
Context: A distributed business process is executed in a distributed computing environment. The service-oriented architecture (SOA) paradigm is a popular option for the integration of software services and the execution of distributed business processes. Entailment constraints, such as mutual exclusion and binding constraints, are important means of controlling process execution. Mutually exclusive tasks result from the division of powerful rights and responsibilities to prevent fraud and abuse. In contrast, binding constraints define that a subject who performed one task must also perform the corresponding bound task(s).
Objective: We aim to provide a model-driven approach for the specification and enforcement of task-based entailment constraints in distributed service-based business processes.
Method: Based on a generic metamodel, we define a domain-specific language (DSL) that maps the different modeling-level artifacts to the implementation level. The DSL integrates elements from role-based access control (RBAC) with the tasks that are performed in a business process. Process definitions are annotated using the DSL, and our software platform uses automated model transformations to produce executable WS-BPEL specifications that enforce the entailment constraints. We evaluate the impact of constraint enforcement on runtime performance for five service-based processes selected from the existing literature.
Results: Our evaluation demonstrates that the approach correctly enforces task-based entailment constraints at runtime. The performance experiments show that runtime enforcement operates with an overhead that scales well up to the order of several tens of thousands of logged invocations. Using our DSL annotations, the user-defined process definition remains declarative and free of security enforcement code.
Conclusion: Our approach decouples the concerns of (non-technical) domain experts from the technical details of entailment constraint enforcement. The developed framework integrates seamlessly with WS-BPEL and the Web services technology stack. Our prototype implementation shows the feasibility of the approach, and the evaluation points to future work and further performance optimizations.
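A minimal sketch of what run-time enforcement of the two constraint types amounts to (illustrative Python, not the paper's DSL or generated WS-BPEL): a mutual exclusion forbids the same subject from performing both tasks of a pair, while a binding requires the same subject for both.

```python
class EntailmentMonitor:
    def __init__(self, mutual_exclusions, bindings):
        self.mutex = mutual_exclusions      # set of frozenset({task1, task2})
        self.bind = bindings                # set of frozenset({task1, task2})
        self.performed = {}                 # task -> subject who performed it

    def authorize(self, task, subject):
        for other, who in self.performed.items():
            pair = frozenset({task, other})
            if pair in self.mutex and who == subject:
                raise PermissionError(f"{subject}: {task} excludes {other}")
            if pair in self.bind and who != subject:
                raise PermissionError(f"{task} is bound to {other}, done by {who}")
        self.performed[task] = subject      # record for later checks

m = EntailmentMonitor(mutual_exclusions={frozenset({"request", "approve"})},
                      bindings={frozenset({"sign", "countersign"})})
m.authorize("request", "alice")
m.authorize("sign", "bob")
m.authorize("approve", "bob")        # ok: bob did not perform "request"
m.authorize("countersign", "bob")    # ok: bound to "sign", also bob
```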

19.
Objective: Many machine learning models have aided medical specialists in diagnosis and prognosis for breast cancer. Accuracy has been regarded as the primary measure for evaluating model performance, but stability, which indicates the robustness of performance under model parameter variation, is also essential. A stable model is, in practice, of benefit to medical specialists who may have little expertise in model tuning. The main purpose of this work is to address the importance of model stability and to suggest one such model.
Methods: A comparative study of three prominent machine learning models was carried out for the prognosis of breast-cancer survivability: support vector machines, artificial neural networks, and semi-supervised learning models.
Material: The Surveillance, Epidemiology, and End Results (SEER) database for breast cancer was used, which is known as the most comprehensive source of information on cancer incidence in the United States.
Results: The best performance was obtained from the semi-supervised learning model. It showed good overall accuracy and stability under model parameter variation. A sharpening procedure enhanced the stability of the model via noise reduction.
Conclusion: We suggest that the semi-supervised learning model is a good candidate that medical professionals can readily employ without spending time and effort on parameter search for a specific model. The ease of use and faster time to results of the predictive model will eventually lead to accurate and less invasive prognosis for breast cancer patients.
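A self-training classifier of the kind compared in the paper can be assembled from standard components. A sketch on scikit-learn's bundled breast-cancer dataset (an illustration only; the paper used the SEER database and its own sharpening procedure):

```python
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from sklearn.semi_supervised import SelfTrainingClassifier
from sklearn.svm import SVC

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Hide most labels: scikit-learn treats -1 as "unlabelled", which the
# self-training loop then fills in from its own confident predictions.
y_semi = y_train.copy()
rng = np.random.default_rng(0)
y_semi[rng.random(len(y_semi)) < 0.8] = -1       # hide 80% of the labels

base = SVC(probability=True, gamma="scale")       # needs predict_proba
model = SelfTrainingClassifier(base).fit(X_train, y_semi)
print("accuracy:", model.score(X_test, y_test))
```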

20.
《Ergonomics》2012,55(11):1462-1473
Abstract

As light sources based on light-emitting diodes (LEDs) increasingly replace classic tungsten-based light sources in household lighting applications, possible impairments of colour perception under these sources, due to their different spectral power distribution, have become a major concern. The Colour Rendering Index (CRI), the only measure available to the end user, is controversial and does not represent a comprehensive measure of colour perception. Aspects of colour perception disregarded by the CRI, such as colour discrimination, have to be taken into account as well. We therefore evaluated colour discrimination performance under a commercially available phosphor-converted LED light source from a popular brand (OSRAM) in comparison with a classic tungsten-based halogen light source. Colour discrimination performance was not affected by the type of light source, indicating that the phosphor-converted LED enables colour discrimination comparable to that under halogen lighting despite being associated with a lower CRI.

Practitioner summary: Considering the increasing use of energy efficient light sources, we compared colour discrimination under a common type of phosphor-converted LED and under traditional halogen lighting. Colour discrimination performance was comparable in both lighting conditions, indicating that the phosphor-converted LED can replace halogen lighting without sacrificing colour discrimination for energy efficiency.

Abbreviations: LED: light-emitting diode; CRI: colour rendering index; CCT: correlated colour temperature; CIE: Commission Internationale de l'Éclairage; FMHT: Farnsworth-Munsell 100-Hue Test; lm: lumen; lx: lux (lumen/m²); W: watt; nm: nanometre; K: kelvin
