20 similar documents found (search time: 31 ms)
1.
Nadja Betzler, Michael R. Fellows, Jiong Guo, Rolf Niedermeier, Frances A. Rosamond. Theoretical Computer Science, 2009
The computation of Kemeny rankings is central to many applications in the context of rank aggregation. Given a set of permutations (votes) over a set of candidates, one searches for a “consensus permutation” that is “closest” to the given set of permutations. Unfortunately, the problem is NP-hard. We provide a broad study of the parameterized complexity of computing optimal Kemeny rankings. Besides the three obvious parameters “number of votes”, “number of candidates”, and solution size (called Kemeny score), we consider further structural parameterizations. More specifically, we show that the Kemeny score (and a corresponding Kemeny ranking) of an election can be computed efficiently whenever the average pairwise distance between two input votes is not too large. In other words, Kemeny Score is fixed-parameter tractable with respect to the parameter “average pairwise Kendall–Tau distance d_a”. We describe a fixed-parameter algorithm with running time 16^⌈d_a⌉ · poly. Moreover, we extend our studies to the parameters “maximum range” and “average range” of positions a candidate takes in the input votes. Whereas Kemeny Score remains fixed-parameter tractable with respect to the parameter “maximum range”, it becomes NP-complete in the case of an average range of two. This excludes fixed-parameter tractability with respect to the parameter “average range” unless P = NP. Finally, we extend some of our results to votes with ties and incomplete votes, where in both cases one no longer has permutations as input.
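As a concrete (exponential-time) illustration of the objective being parameterized, the Kemeny score can be computed by brute force over all permutations; the small election below is made up for the example and is not from the paper:

```python
from itertools import combinations, permutations

def kendall_tau(r1, r2):
    """Number of candidate pairs ranked in opposite order by r1 and r2."""
    pos1 = {c: i for i, c in enumerate(r1)}
    pos2 = {c: i for i, c in enumerate(r2)}
    return sum(1 for a, b in combinations(r1, 2)
               if (pos1[a] - pos1[b]) * (pos2[a] - pos2[b]) < 0)

def kemeny(votes):
    """Exhaustive search: return (Kemeny score, a Kemeny ranking).
    Feasible only for a handful of candidates, hence the interest in
    fixed-parameter algorithms."""
    best_score, best_ranking = float("inf"), None
    for perm in permutations(votes[0]):
        score = sum(kendall_tau(perm, v) for v in votes)
        if score < best_score:
            best_score, best_ranking = score, perm
    return best_score, best_ranking

votes = [("a", "b", "c"), ("a", "c", "b"), ("b", "a", "c")]
score, ranking = kemeny(votes)  # consensus a > b > c with score 2
```

The total Kendall–Tau distance of the winner to the vote set is exactly the Kemeny score the abstract refers to.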
2.
The most effective way to maximize the lifetime of a wireless sensor network (WSN) is to allocate initial energy to sensors such that they exhaust their energy at the same time. The lifetime of a WSN, as well as an optimal initial energy allocation, is determined by the network design. The main contribution of this paper is to show that the lifetime of a WSN can be maximized by an optimal network design. We represent the network lifetime as a function of the number m of annuli and show that m has a significant impact on network lifetime. We prove that if the energy consumed by data transmission is proportional to d^α + c, where d is the distance of data transmission and α and c are constants, then for a circular area of interest with radius R, the optimal number of annuli that maximizes the network lifetime is m = R((α − 1)/c)^(1/α) for an arbitrary sensor density function.
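The closed-form optimum can be evaluated directly; the numeric values below are illustrative and not taken from the paper:

```python
def optimal_annuli(R, alpha, c):
    """Optimal number of annuli m = R * ((alpha - 1) / c) ** (1 / alpha)
    for a circular field of radius R, where sending over distance d
    costs energy proportional to d**alpha + c. In practice m would be
    rounded to a positive integer."""
    return R * ((alpha - 1) / c) ** (1.0 / alpha)

# e.g. R = 100, alpha = 2, c = 4:  m = 100 * (1/4) ** 0.5 = 50 annuli
m = optimal_annuli(100.0, 2.0, 4.0)
```

Note how a larger fixed per-transmission overhead c drives the optimum toward fewer, wider annuli.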
3.
4.
5.
6.
Motivated by recent applications of wireless sensor networks in monitoring infrastructure networks, we address the problem of optimal coverage of infrastructure networks using sensors whose sensing performance decays with distance. We show that this problem can be formulated as a continuous p-median problem on networks. The literature has extensively addressed the discrete p-median problem on networks and in continuum domains, and the continuous p-median problem in continuum domains. However, in-depth analysis of the continuous p-median problem on networks has been lacking. With a sensing performance model that decays with distance, each sensor covers a region equivalent to its Voronoi partition on the network under the shortest-path distance metric. Using Voronoi partitions, we define a directional partial derivative of the coverage metric with respect to a sensor’s location. We then propose a gradient descent algorithm to obtain a locally optimal solution with guaranteed convergence. The quality of an optimal solution depends on the choice of the initial configuration of sensors. We obtain an initial configuration using two approaches: by solving the discrete p-median problem on a lumped network, and by random sampling. We consider two methods of random sampling: uniform sampling and D²-sampling. The first approach, seeded with the solution of the discrete p-median problem, yields the best coverage performance for large networks, but at the cost of high running time. We also observe that gradient descent on an initial solution obtained by D²-sampling yields coverage within at most 7% of that of the first approach, with a much shorter running time.
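The D²-sampling step mentioned above is the k-means++-style seeding rule; a minimal sketch over an abstract point set with a user-supplied distance function (for a network, the shortest-path metric would be plugged in) might look like this:

```python
import random

def d2_sample(points, k, dist, seed=0):
    """k-means++-style D^2-sampling: after one uniformly random site,
    each new site is drawn with probability proportional to the squared
    distance to its nearest already-chosen site."""
    rng = random.Random(seed)
    sites = [rng.choice(points)]
    while len(sites) < k:
        d2 = [min(dist(p, s) for s in sites) ** 2 for p in points]
        r = rng.random() * sum(d2)
        acc = 0.0
        for p, w in zip(points, d2):
            acc += w
            if acc >= r:
                sites.append(p)
                break
    return sites
```

Because far-away points are heavily favored, the initial sites tend to spread over the domain, which is why this seeding is a cheap alternative to solving the discrete p-median problem first.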
7.
C. Corsi. Microsystem Technologies, 1995, 1(3): 149–154
The term Smart Sensors, one of the most strategic device classes of Micro System Technologies (MST), refers to sensors which contain both sensing and signal processing capabilities, with objectives ranging from simple viewing to sophisticated remote sensing, surveillance, search/track, weapon guidance, robotics, perceptronics and intelligence applications. This paper deals specifically with a new class of smart sensors in infrared spectral bands whose development started some years ago, when it was recognized that the rapid advances of very large scale integration (VLSI) processor technology and mosaic infrared detector array technology could be combined to develop a new generation of infrared smart sensor systems with much improved performance. Sophisticated signal processing operations have been studied for these new systems by integrating microcomputers and other VLSI signal processors within or next to the sensor arrays on the same focal plane, avoiding computing systems located far away from the sensors. Recently, this approach has been upgraded by introducing into the sensor some of the basic functions of living eyes, such as dynamic stare, inhomogeneity compensation, and spatial and temporal filtering. New objectives and requirements, and the relevant microsystem technologies, are presented for this type of new infrared smart sensor, which was originally developed by us.
8.
B.S. Manoj, Archana Sekhar, C. Siva Ram Murthy. Journal of Parallel and Distributed Computing, 2009
Sensor networks are increasingly being used for applications which require fast processing of data, such as multimedia processing and collaboration among sensors to relay observed data to a base station (BS). Distributed computing can be used on a sensor network to reduce the completion time of a task (an application) and to distribute the energy consumption equitably across all sensors, so that certain sensors do not die out faster than the others. The distribution of task modules to sensors should consider not only the time and energy savings, but must also improve the reliability of the entire task execution. We formulate the above as an optimization problem, and use the A∗ algorithm with improvements to determine an optimal static allocation of modules among a set of sensors. We also suggest a faster algorithm, called the greedy A∗ algorithm, for cases where a sub-optimal solution is sufficient. Both algorithms have been simulated, and the results have been compared in terms of energy savings, decrease in completion time of the task, and the deviation of the sub-optimal solution from the optimal one. The sub-optimal solution required 8%–35% less computation, at the cost of 2.5%–15% deviation from the optimal solution in terms of average energy spent per sensor node. Both the A∗ and greedy A∗ algorithms have been shown to distribute energy consumption more uniformly across sensors than centralized execution. The greedy A∗ algorithm is found to be scalable, as the number of evaluations in determining the allocation increases linearly with the number of sensors.
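The optimal-versus-greedy trade-off can be illustrated with a toy makespan-style allocation model; this is not the authors' A∗ formulation, and the cost matrix is invented for the example:

```python
from itertools import product

def optimal_allocation(costs):
    """costs[m][s] = energy to run module m on sensor s.
    Exhaustively minimize the maximum per-sensor energy (so no single
    sensor is drained much faster than the others)."""
    n_sen = len(costs[0])
    best = (float("inf"), None)
    for assign in product(range(n_sen), repeat=len(costs)):
        load = [0.0] * n_sen
        for m, s in enumerate(assign):
            load[s] += costs[m][s]
        best = min(best, (max(load), assign))
    return best

def greedy_allocation(costs):
    """Greedy baseline: place each module on the sensor whose load
    after placement would be smallest. Much faster, possibly worse."""
    n_sen = len(costs[0])
    load = [0.0] * n_sen
    assign = []
    for row in costs:
        s = min(range(n_sen), key=lambda j: load[j] + row[j])
        load[s] += row[s]
        assign.append(s)
    return max(load), tuple(assign)
```

The exhaustive search is exponential in the number of modules, which is why the paper resorts to A∗ with admissible pruning rather than enumeration.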
9.
10.
In this paper, we propose a two-layer sensor fusion scheme for multiple-hypothesis multisensor systems. To reflect reality in decision making, uncertain decision regions are introduced into the hypothesis testing process, so that the entire decision space is partitioned into correct, uncertain and incorrect regions. The first layer of decisions is made by each sensor independently, based on a set of optimal decision rules. The fusion process is performed by treating the fusion center as an additional virtual sensor in the system; this virtual sensor makes a decision based on the decisions reached by the set of sensors. The optimal decision rules are derived by minimizing the Bayes risk function. As a consequence, the performance of the system, as well as of individual sensors, can be quantified by the probabilities of correct, incorrect and uncertain decisions. Numerical examples with three hypotheses and two- and four-sensor systems are presented to illustrate the proposed scheme.
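A toy version of the two-layer scheme, with thresholded MAP decisions standing in for the Bayes-risk-optimal rules and majority voting at the virtual sensor (the threshold and posteriors below are illustrative assumptions, not the paper's derived rules):

```python
from collections import Counter

def local_decision(posteriors, tau=0.7):
    """First layer: declare the MAP hypothesis only if its posterior
    exceeds tau; otherwise the sample falls into the uncertain region,
    signalled here by returning None."""
    best = max(range(len(posteriors)), key=posteriors.__getitem__)
    return best if posteriors[best] > tau else None

def fuse(decisions):
    """Second layer ('virtual sensor'): majority vote over the
    confident local decisions; uncertain if no sensor was confident."""
    votes = [d for d in decisions if d is not None]
    return Counter(votes).most_common(1)[0][0] if votes else None
```

Counting how often each of the three outcomes (correct, incorrect, uncertain) occurs over many trials gives exactly the performance probabilities the abstract describes.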
11.
12.
13.
14.
15.
Soheil Bahrampour, Asok Ray, Soumalya Sarkar, Thyagaraju Damarla, Nasser M. Nasrabadi. Pattern Recognition Letters, 2013, 34(16): 2126–2134
This paper addresses the problem of target detection and classification, where the performance is often limited due to high rates of false alarm and classification error, possibly because of inadequacies in the underlying algorithms of feature extraction from sensory data and subsequent pattern classification. In this paper, a recently reported feature extraction algorithm, symbolic dynamic filtering (SDF), is investigated for target detection and classification by using unmanned ground sensors (UGS). In SDF, sensor time series data are first symbolized to construct probabilistic finite state automata (PFSA) that, in turn, generate low-dimensional feature vectors. In this paper, the performance of SDF is compared with that of two commonly used feature extractors, namely Cepstrum and principal component analysis (PCA), for target detection and classification. Three different pattern classifiers have been employed to compare the performance of the three feature extractors for target detection and human/animal classification by UGS systems based on two sets of field data that consist of passive infrared (PIR) and seismic sensors. The results show consistently superior performance of SDF-based feature extraction over Cepstrum-based and PCA-based feature extraction in terms of successful detection, false alarm, and misclassification rates.
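The SDF front end can be caricatured in a few lines: quantize the time series into symbols, then use the row-normalized transition counts of the induced symbol chain as a low-dimensional feature vector. This sketch simplifies the partitioning and PFSA construction relative to the paper:

```python
def symbolize(series, n_symbols=3):
    """Partition the signal range into equal-frequency bins (a common
    SDF choice) and map each sample to a symbol index 0..n_symbols-1."""
    cuts = sorted(series)
    bounds = [cuts[len(cuts) * i // n_symbols] for i in range(1, n_symbols)]
    return [sum(x >= b for b in bounds) for x in series]

def pfsa_features(symbols, n_symbols=3):
    """Row-normalized symbol transition counts; flattening the matrix
    yields the feature vector fed to a downstream classifier."""
    counts = [[0] * n_symbols for _ in range(n_symbols)]
    for a, b in zip(symbols, symbols[1:]):
        counts[a][b] += 1
    feats = []
    for row in counts:
        total = sum(row) or 1
        feats.extend(c / total for c in row)
    return feats
```

The feature dimension is n_symbols², independent of the length of the raw time series, which is the dimensionality reduction the abstract highlights.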
16.
17.
Research into ambient assisted living (AAL) strives to ease the daily lives of people with disabilities or chronic medical conditions. AAL systems typically consist of multitudes of sensors and embedded devices, generating large amounts of medical and ambient data. However, these biomedical sensors lack the processing power to perform key monitoring and data-aggregation tasks, necessitating data transmission and computation at central locations. The focus here is on the development of a scalable and context-aware framework and easing the flow between data collection and data processing. The resource-constrained nature of typical wearable body sensors is factored into our proposed model, with cloud computing features utilized to provide a real-time assisted-living service. With the myriad of distributed AAL systems at play, each with unique requirements and eccentricities, the challenge lies in the need to service these disparate systems with a middleware layer that is both coherent and flexible. There is significant complexity in the management of sensor data and the derivation of contextual information, as well as in the monitoring of user activities and in locating appropriate situational services. The proposed CoCaMAAL model seeks to address such issues and implement a service-oriented architecture (SOA) for unified context generation. This is done by efficiently aggregating raw sensor data and the timely selection of appropriate services using a context management system (CMS). With a unified model that includes patients, devices, and computational servers in a single virtual community, AAL services are enhanced. We have prototyped the proposed model and implemented some case studies to demonstrate its effectiveness.
18.
Application of multiscale PCA to sensor fault diagnosis
A multiscale principal component analysis method is proposed for sensor fault detection and identification. After decomposition of the sensor signals by the wavelet transform, the coarse-scale coefficients from sensors with strong correlation are used to establish the principal component analysis model. A moving window is designed to monitor the data from each sensor using the model. For fault detection and identification, the data in the window are first decomposed with the wavelet transform to obtain the coarse-scale coefficients, and the squared prediction error is used to detect a failure. The sensor validity index is then introduced to identify the faulty sensor; it provides a quantitative index rather than the qualitative contrast given by the contribution-based approach. Finally, the applicability and effectiveness of the proposed method are illustrated on sensors of an industrial boiler.
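The two building blocks, coarse-scale wavelet coefficients and the squared prediction error against a PCA model, can be sketched as follows (Haar wavelet, one-component model; in practice the mean and principal direction would be estimated from fault-free training data):

```python
import math

def haar_coarse(signal):
    """One level of the Haar wavelet transform: scaled pairwise averages
    give the coarse-scale (approximation) coefficients."""
    return [(a + b) / math.sqrt(2) for a, b in zip(signal[::2], signal[1::2])]

def spe(sample, mean, pc):
    """Squared prediction error of a sample against a one-component PCA
    model given by a mean vector and a unit principal direction pc.
    A sample on the model subspace has SPE near zero; a faulty sensor
    pushes its SPE above the detection threshold."""
    centered = [x - m for x, m in zip(sample, mean)]
    t = sum(c * p for c, p in zip(centered, pc))          # score along pc
    residual = [c - t * p for c, p in zip(centered, pc)]  # off-model part
    return sum(r * r for r in residual)
```

In the method above, the SPE would be computed over each moving window of coarse-scale coefficients rather than over raw samples.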
19.
20.
Kai Xing, Xiuzhen Cheng, Fang Liu, Shmuel Rotenstreich. Journal of Parallel and Distributed Computing, 2007
We propose a novel vision for roadway safety warning based on sensor networks, aiming to provide user-friendly zero-delay safety warnings to motorists. Our idea leverages advanced sensing, networking and storage technologies. Roadway sensors detect events and store event records at multiple designated locations upstream, against the direction of traffic, so that passing drivers can be alerted to potential dangers or traffic delays through wireless communication between the roadway sensors and the vehicle. We design a location-centric storage (LCS) protocol, which manages the propagation and storage of event records based on the time needed to clear the road. In LCS, the density of the sensors storing an event record decreases logarithmically with the distance to the event location. Thus, the closer a driver is to the event position, the more warnings the driver may receive. LCS is further tailored to the case of “highway” sensor networks, in which all sensors are deployed along a straight line mimicking a highway, and to the more complex case in which two roads intersect. We conduct both theoretical analysis and a simulation study to verify the performance of LCS when applied to roadway sensor networks for safety warning. The results indicate that LCS is fair to all sensors. We conclude that roadway safety warning based on sensor networks is a promising approach to realizing ITS’s “Zero Fatality, Zero Delay” roadway safety philosophy.
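One simple way to realize a replica density that falls off with distance is to space the stored copies exponentially upstream of the event; this sketch is only illustrative and is not the paper's exact LCS placement rule:

```python
def storage_points(event_pos, horizon, unit=1.0):
    """Illustrative LCS-style placement on a 1-D 'highway': replicate
    the event record upstream (against traffic) at exponentially spaced
    offsets unit, 2*unit, 4*unit, ... up to a horizon. A driver still
    ahead of the event then passes more copies the closer they get."""
    pts, d = [], unit
    while d <= horizon:
        pts.append(event_pos - d)
        d *= 2
    return pts
```

With doubling gaps, the number of copies stored within distance d of the event grows like log(d), matching the logarithmic density decay described above.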