Similar Documents
20 similar documents found (search time: 203 ms)
1.
《Computer Networks》2003,41(6):687-706
Several routing algorithms for mobile ad hoc networks have been proposed in the recent past [Broch et al., The Dynamic Source Routing Protocol for Mobile Ad Hoc Networks, Internet Draft draft-ietf-manet-dsr-03.txt, October 1999; Perkins et al., Ad Hoc On Demand Distance Vector (AODV) Routing, Internet Draft draft-ietf-manet-aodv-04.txt, October 1999; Haas and Pearlman, The Zone Routing Protocol (ZRP) for Ad Hoc Networks, Internet Draft draft-zone-routing-protocol-01.txt, August 1998; IEEE J. Select. Areas Commun. 17 (8) (1999) 1454]. With the exception of a few, these protocols (i) involve all nodes in the route management process, (ii) rely on the use of broadcast relays for route computation, and (iii) are primarily reactive in nature. Related work [Broch et al., Performance Comparison of Multi-Hop Wireless Ad Hoc Network Routing Protocols, Proceedings of IEEE MOBICOM, Dallas, TX, October 1998; Johansson et al., Scenario-based Performance Analysis of Routing Protocols for Mobile Ad-hoc Networks, Proceedings of IEEE MOBICOM, Seattle, August 1999] has shown that the capacity utilization in ad hoc networks decreases significantly when broadcast relays or “broadcast storms” are performed frequently. This effect is compounded when all nodes in the network take part in the route computation. We propose and study an approach based on overlaying a virtual infrastructure (adaptation of the core, proposed in [IEEE J. Select. Areas Commun. 17 (8) (1999) 1454]) on an ad hoc network and operating routing protocols over the infrastructure. The core enables routing protocols to use only a subset of nodes in the network for route management and avoid the use of broadcast relays. We evaluate the performance of dynamic source routing (DSR) [Broch et al., The Dynamic Source Routing Protocol for Mobile Ad Hoc Networks, Internet Draft draft-ietf-manet-dsr-03.txt, October 1999] and AODV [Perkins et al., Ad Hoc On Demand Distance Vector (AODV) Routing, Internet Draft draft-ietf-manet-aodv-04.txt, October 1999], when they are operated over the core and compare their performances against those of their basic versions. Through extensive simulations using ns-2 [Fall and Vardhan, ns notes and documentation, available from http://www-mash.cs.berkeley.edu/ns/, 1999], we show that using a virtual infrastructure significantly improves the performance of both DSR and AODV, in terms of data delivery and routing overhead, under varied network characteristics.

2.
Very recently, Pan et al. [Proceedings of the 9th Annual Conference on Genetic and Evolutionary Computation, GECCO07, pp. 126–33] presented a novel discrete differential evolution algorithm for the permutation flowshop scheduling problem with the makespan criterion. On the other hand, the iterated greedy algorithm was proposed by [Ruiz, R., & Stützle, T. (2007). A simple and effective iterated greedy algorithm for the permutation flowshop scheduling problem. European Journal of Operational Research, 177(3), 2033–49] for the permutation flowshop scheduling problem with the makespan criterion. However, neither algorithm has been applied to the permutation flowshop scheduling problem with the total flowtime criterion. Based on their excellent performance with the makespan criterion, we extend both algorithms in this paper to the total flowtime objective. Furthermore, we propose a novel referenced local search procedure, hybridized with both algorithms, to further improve the solution quality. The referenced local search explores the solution space using reference positions taken from a reference solution, in the hope of finding better positions for jobs when performing insertion operations. Computational results show that both algorithms with the referenced local search are either better than or highly competitive with all existing approaches in the literature for both the makespan and total flowtime objectives. Especially for the total flowtime criterion, their performance is superior to the particle swarm optimization algorithms proposed by [Tasgetiren, M. F., Liang, Y. -C., Sevkli, M., Gencyilmaz, G. (2007). Particle swarm optimization algorithm for makespan and total flowtime minimization in permutation flowshop sequencing problem. European Journal of Operational Research, 177(3), 1930–47] and [Jarboui, B., Ibrahim, S., Siarry, P., Rebai, A. (2007). A combinatorial particle swarm optimisation for solving permutation flowshop problems. Computers & Industrial Engineering, doi:10.1016/j.cie.2007.09.006]. Ultimately, for Taillard’s benchmark suite, four best known solutions for the makespan criterion, as well as 40 out of the 90 best known solutions for the total flowtime criterion, are further improved by one or the other of the algorithms presented in this paper.
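To make the insertion idea concrete, here is a minimal Python sketch of an insertion-based local search guided by a reference permutation, in the spirit of the referenced local search described above. The completion-time recursion is the standard one for permutation flowshops; the function names, the ±2 window around the reference position, and the improve-only acceptance rule are illustrative assumptions, not the authors' exact procedure.

```python
def total_flowtime(perm, p):
    """Total flowtime of a job permutation on a permutation flowshop.
    p[j][k] is the processing time of job j on machine k."""
    m = len(p[0])
    comp = [0.0] * m              # completion times of the previous job on each machine
    total = 0.0
    for j in perm:
        prev = 0.0                # completion time of job j on the previous machine
        for k in range(m):
            prev = max(prev, comp[k]) + p[j][k]
            comp[k] = prev
        total += comp[m - 1]      # job j finishes on the last machine
    return total


def referenced_insertion_pass(perm, reference, p):
    """One pass of an insertion local search guided by a reference solution:
    each job is tentatively re-inserted near its position in the reference,
    and a move is kept only if it reduces total flowtime."""
    best = list(perm)
    best_cost = total_flowtime(best, p)
    ref_pos = {job: i for i, job in enumerate(reference)}
    for job in list(best):
        partial = [x for x in best if x != job]
        center = min(ref_pos[job], len(partial))
        for pos in range(max(0, center - 2), min(len(partial), center + 2) + 1):
            candidate = partial[:pos] + [job] + partial[pos:]
            cost = total_flowtime(candidate, p)
            if cost < best_cost:
                best, best_cost = candidate, cost
    return best, best_cost


# Tiny example: 3 jobs, 2 machines.
p = [[3, 2], [1, 4], [2, 2]]
print(referenced_insertion_pass([0, 1, 2], reference=[1, 2, 0], p=p))
```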

3.
Many organizations rely on relational database platforms for OLAP-style querying (aggregation and filtering) for small to medium-sized applications. We investigate the impact of scaling up the data sizes for such queries. We intend to illustrate what kind of performance results an organization could expect should they migrate current applications to big data environments. This paper benchmarks the performance of Hive (Thusoo et al., 2009) [9], a parallel data warehouse platform that is a part of the Hadoop software stack. We set up a 4-node Hadoop cluster using Hortonworks HDP 1.3.2 (Hortonworks HDP 1.3.2). We use the data generator provided by the TPC-DS benchmark (DSGen v1.1.0) to generate data of different scales. We compare the performance of loading data and querying for SQL and Hive Query Language (HiveQL) on a relational database installation (MySQL) and on a Hive cluster, respectively. We measure the speedup in query execution for three dataset sizes resulting from the scale-up. Hive loads the large datasets faster than MySQL, while it is marginally slower than MySQL when loading the smaller datasets. Query execution in Hive is also faster. We also investigate executing Hive queries concurrently in workloads and conclude that serial execution of queries is a much better practice for clusters with limited resources.
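The load-and-query comparison above ultimately reduces to wall-clock timing of the same statements on two engines. A minimal, engine-agnostic timing sketch is shown below; it assumes any PEP 249 (DB-API) drivers for MySQL and Hive, and the `QUERY` placeholder and connection setup are hypothetical.

```python
import time


def time_query(cursor, sql, runs=3):
    """Execute a query several times through a DB-API cursor and return the best
    wall-clock time; fetching the rows forces the full result set to materialise."""
    best = float("inf")
    for _ in range(runs):
        start = time.perf_counter()
        cursor.execute(sql)
        cursor.fetchall()
        best = min(best, time.perf_counter() - start)
    return best


# Usage sketch (cursors come from whichever MySQL / Hive DB-API drivers are used):
#   speedup = time_query(mysql_cursor, QUERY) / time_query(hive_cursor, QUERY)
# A ratio greater than 1 means Hive answered the query faster than MySQL.
```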

4.
In this note we sketch the proof of desingularization over fields of characteristic zero in [Encinas, S., Villamayor, O., A new theorem of desingularization over fields of characteristic zero, Preprint: arXiv:math.AG/0101208]; and we refer to [Bravo, A., Encinas, S., Villamayor, O., A simplified proof of desingularization and applications. Revista Matemática Iberoamericana (in press)], or [Villamayor, O., Desingularization: An introduction for the young algebraist (Notes)], for full details. This proof, defined in terms of an algorithm, provides a conceptual simplification over that of Hironaka. In fact, Hironaka’s notions of standard basis, Hilbert–Samuel functions, and normal flatness are avoided.

5.
Conventional studies on buffer-constrained flowshop scheduling problems have considered applications with a limitation on the number of jobs that are allowed in the intermediate storage buffer before flowing to the next machine. The study in Lin et al. (Comput. Oper. Res. 36(4):1158–1175, 2008a) considered a two-machine flowshop problem with “processing time-dependent” buffer constraints for multimedia applications. A “passive” prefetch model (the PP-problem), in which the download process is suspended unless the buffer is sufficient for keeping an incoming media object, was applied in Lin et al. (Comput. Oper. Res. 36(4):1158–1175, 2008a). This study further considers an “active” prefetch model (the AP-problem) that exploits the unoccupied buffer space by advancing the download of the incoming object by a computed maximal duration that possibly does not cause a buffer overflow. We obtain new complexity results for both problems. This study also proposes a new lower bound which improves the branch and bound algorithm presented in Lin et al. (Comput. Oper. Res. 36(4):1158–1175, 2008a). For the PP-problem, compared to the lower bounds developed in Lin et al. (Comput. Oper. Res. 36(4):1158–1175, 2008a), on average, the results of the simulation experiments show that the proposed new lower bound cuts about 38% of the nodes and 32% of the execution time for searching the optimal solutions.

6.

According to the NMC Horizon Report (Johnson et al. in Horizon Report Europe: 2014 Schools Edition, Publications Office of the European Union, The New Media Consortium, Luxembourg, Austin, 2014 [1]), data-driven learning in combination with emerging academic areas such as learning analytics has the potential to tailor students’ education to their needs (Johnson et al. 2014 [1]). Focusing on this aim, this article presents a web-based (training) platform for German-speaking users aged 8–12. Our objective is to support primary-school pupils—especially those who struggle with the acquisition of German orthography—with an innovative tool to improve their writing and spelling competencies. On this platform, which is free of charge, they can write and publish texts supported by a special feature, called the intelligent dictionary. It gives automatic feedback for correcting mistakes that occur in the course of fulfilling a meaningful writing task. Consequently, pupils can focus on writing texts and are able to correct texts on their own before publishing them. Additionally, they gain deeper insights into German orthography. Exercises are recommended for further training based on the spelling mistakes that occurred. This article covers the background to German orthography and its teaching and learning, as well as details concerning the requirements for the platform and the user interface design. Further, combined with learning analytics, we expect to gain deeper insight into the process of spelling acquisition, which will support optimizing our exercises and providing better materials in the long run.


7.
Wavelet frame based models for image restoration have been extensively studied for the past decade (Chan et al. in SIAM J. Sci. Comput. 24(4):1408–1432, 2003; Cai et al. in Multiscale Model. Simul. 8(2):337–369, 2009; Elad et al. in Appl. Comput. Harmon. Anal. 19(3):340–358, 2005; Starck et al. in IEEE Trans. Image Process. 14(10):1570–1582, 2005; Shen in Proceedings of the international congress of mathematicians, vol. 4, pp. 2834–2863, 2010; Dong and Shen in IAS lecture notes series, Summer program on “The mathematics of image processing”, Park City Mathematics Institute, 2010). The success of wavelet frames in image restoration is mainly due to their capability of sparsely approximating piecewise smooth functions like images. Most of the wavelet frame based models designed in the past are based on the penalization of the ℓ1 norm of wavelet frame coefficients, which, under certain conditions, is the right choice, as supported by theories of compressed sensing (Candes et al. in Appl. Comput. Harmon. Anal., 2010; Candes et al. in IEEE Trans. Inf. Theory 52(2):489–509, 2006; Donoho in IEEE Trans. Inf. Theory 52:1289–1306, 2006). However, the assumptions of compressed sensing may not be satisfied in practice (e.g. for image deblurring and CT image reconstruction). Recently in Zhang et al. (UCLA CAM Report, vol. 11-32, 2011), the authors propose to penalize the ℓ0 “norm” of the wavelet frame coefficients instead, and they have demonstrated significant improvements of their method over some commonly used ℓ1 minimization models in terms of the quality of the recovered images. In this paper, we propose a new algorithm, called the mean doubly augmented Lagrangian (MDAL) method, for ℓ0 minimization based on the classical doubly augmented Lagrangian (DAL) method (Rockafellar in Math. Oper. Res. 97–116, 1976). Our numerical experiments show that the proposed MDAL method is not only more efficient than the method proposed by Zhang et al. (UCLA CAM Report, vol. 11-32, 2011), but can also generate recovered images of even higher quality. This study confirms the feasibility of using the ℓ0 “norm” for image restoration problems.
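For orientation, the analysis-based ℓ0 wavelet frame model referred to above is usually written in the following generic form (notation assumed here rather than copied from the paper), with W a wavelet frame transform, A the degradation operator (e.g. a blur or projection matrix), f the observed data and λ a vector of nonnegative weights:

```latex
\min_{u}\ \frac{1}{2}\,\|Au-f\|_2^2 \;+\; \|\lambda \odot Wu\|_0,
\qquad \|x\|_0 := \#\{\, i : x_i \neq 0 \,\}.
```

The MDAL method proposed in this paper targets this kind of nonconvex, nonsmooth objective.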

8.
Complex activities, e.g. pole vaulting, are composed of a variable number of sub-events connected by complex spatio-temporal relations, whereas simple actions can be represented as sequences of short temporal parts. In this paper, we learn hierarchical representations of activity videos in an unsupervised manner. These hierarchies of mid-level motion components are data-driven decompositions specific to each video. We introduce a spectral divisive clustering algorithm to efficiently extract a hierarchy over a large number of tracklets (i.e. local trajectories). We use this structure to represent a video as an unordered binary tree. We model this tree using nested histograms of local motion features. We provide an efficient positive definite kernel that computes the structural and visual similarity of two hierarchical decompositions by relying on models of their parent–child relations. We present experimental results on four recent challenging benchmarks: the High Five dataset (Patron-Perez et al., High five: recognising human interactions in TV shows, 2010), the Olympics Sports dataset (Niebles et al., Modeling temporal structure of decomposable motion segments for activity classification, 2010), the Hollywood 2 dataset (Marszalek et al., Actions in context, 2009), and the HMDB dataset (Kuehne et al., HMDB: A large video database for human motion recognition, 2011). We show that per-video hierarchies provide additional information for activity recognition. Our approach improves over unstructured activity models, baselines using other motion decomposition algorithms, and the state of the art.
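As a rough illustration of the kind of divisive decomposition used to build such per-video hierarchies, the sketch below performs a generic recursive two-way spectral split of tracklets from an affinity matrix. The affinity construction, the minimum node size, and the stopping rule are assumptions; the authors' algorithm is specifically engineered to scale to very large tracklet sets.

```python
import numpy as np


def spectral_split(affinity):
    """Two-way split via the sign of the Fiedler vector, i.e. the eigenvector of the
    second-smallest eigenvalue of the unnormalised graph Laplacian."""
    degree = affinity.sum(axis=1)
    laplacian = np.diag(degree) - affinity
    _, vecs = np.linalg.eigh(laplacian)          # eigenvalues in ascending order
    fiedler = vecs[:, 1]
    return np.where(fiedler < 0)[0], np.where(fiedler >= 0)[0]


def divisive_tree(indices, affinity, min_size=5):
    """Recursively split a set of tracklet indices into an unordered binary tree."""
    indices = np.asarray(indices)
    if len(indices) <= min_size:
        return {"tracklets": indices}
    left, right = spectral_split(affinity[np.ix_(indices, indices)])
    if len(left) == 0 or len(right) == 0:        # degenerate split: stop recursing
        return {"tracklets": indices}
    return {"tracklets": indices,
            "left": divisive_tree(indices[left], affinity, min_size),
            "right": divisive_tree(indices[right], affinity, min_size)}


# tree = divisive_tree(np.arange(n_tracklets), affinity_matrix)
```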

9.
10.
Cover Image: Foreground image taken from Tongzhong Ju et al. (pp. 618–631): IgA1 has normal O-glycans, with a core 1 O-glycan decorated with sialic acids (purple diamonds) in either mono- or disialylated structures. But in IgA1 from IgA nephropathy (IgAN) patients it is proposed that Tn and STn antigens lacking the core 1 O-glycan accumulate and contribute to IgAN. IgAN is the most common glomerulonephritis and results in renal failure in 20–40% of patients over 25 years of age. Deposition of IgA1 in the glomerular mesangium is thought to drive the disease. Background image, kindly provided by Tongzhong Ju, Richard D. Cummings, et al.: Human primary colorectal adenocarcinoma cells highly express Tn antigen. A formalin-fixed, paraffin-embedded tissue section of human primary colorectal adenocarcinoma was immunohistochemically stained with an anti-Tn monoclonal antibody (red), with haematoxylin counterstaining (blue). The tumor cells are strongly stained with the anti-Tn mAb. Cover design by SCHULZGrafik-Design.



11.
Flutter shutter (coded exposure) cameras allow the exposure time to be extended indefinitely for uniform motion blurs. Recently, Tendero et al. (SIAM J Imaging Sci 6(2):813–847, 2013) proved that for a fixed known velocity v the gain of a flutter shutter in terms of root mean square error (RMSE) cannot exceed a factor of 1.1717 compared to an optimal snapshot. The aforementioned bound is optimal in the sense that this 1.1717 factor can be attained. However, this disheartening bound is in direct contradiction with the recent results by Cossairt et al. (IEEE Trans Image Process 22(2), 447–458, 2013). Our first goal in this paper is to resolve this discrepancy mathematically. An interesting question was raised by the authors of reference (IEEE Trans Image Process 22(2), 447–458, 2013). They state that the “gain for computational imaging is significant only when the average signal level J is considerably smaller than the read noise variance σ_r^2” (Cossairt et al., IEEE Trans Image Process 22(2), 447–458, 2013, p. 5). In other words, according to Cossairt et al. (IEEE Trans Image Process 22(2), 447–458, 2013) a flutter shutter would be more efficient when used in low light conditions, e.g., indoor scenes or at night. Our second goal is to prove that this statement is based on an incomplete camera model and that a complete mathematical model disproves it. To do so we propose a general flutter shutter camera model that includes photonic, thermal (the amplifier noise may also be mentioned as another source of background noise, which can be included w.l.o.g. in the thermal noise) and additive (sensor readout, quantization) noises of finite variances; the additive (sensor readout) noise may contain other components such as reset noise and quantization noise, which we include w.l.o.g. in the readout. Our analysis provides exact formulae for the mean square error of the final deconvolved image. It also allows us to confirm that the gain in terms of RMSE of any flutter shutter camera is bounded from above by 1.1776 when compared to an optimal snapshot. The bound is uniform with respect to the observation conditions and applies for any fixed known velocity. Incidentally, the proposed formalism and its consequences also apply to, e.g., the Levin et al. motion-invariant photography (ACM Trans Graphics (TOG), 27(3):71:1–71:9, 2008; Method and apparatus for motion invariant imaging, October 1 2009, US Patent 20,090,244,300, 2009) and its variant (Cho et al., Motion blur removal with orthogonal parabolic exposures, 2010). In short, we bring mathematical proofs to the effect of contradicting the claims of Cossairt et al. (IEEE Trans Image Process 22(2), 447–458, 2013). Lastly, this paper points out the kind of optimization needed if one wants to turn the flutter shutter into a useful imaging tool.

12.
This paper investigates the problem of pth moment exponential stability for a class of stochastic recurrent neural networks with Markovian jump parameters. With the help of a Lyapunov function, stochastic analysis techniques, a generalized Halanay inequality and the Hardy inequality, some novel sufficient conditions for the pth moment exponential stability of the considered system are derived. The results obtained in this paper are completely new and complement and improve some of the previously known results (Liao and Mao, Stoch Anal Appl, 14:165–185, 1996; Wan and Sun, Phys Lett A, 343:306–318, 2005; Hu et al., Chaos Solitons Fractals, 27:1006–1010, 2006; Sun and Cao, Nonlinear Anal Real, 8:1171–1185, 2007; Huang et al., Inf Sci, 178:2194–2203, 2008; Wang et al., Phys Lett A, 356:346–352, 2006; Peng and Liu, Neural Comput Appl, 20:543–547, 2011). Moreover, a numerical example is provided to demonstrate the effectiveness and applicability of the theoretical results.
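For readers outside the field, the stability notion used here is the standard one below, stated generically; the paper's precise formulation (which also accounts for the Markovian switching mode) may differ in detail. The trivial solution is said to be pth moment exponentially stable if there exist constants C > 0 and λ > 0 such that

```latex
\mathbb{E}\,\|x(t; x_0)\|^{p} \;\le\; C\,\|x_0\|^{p}\, e^{-\lambda t}, \qquad t \ge 0,
```

for every admissible initial state x_0; the case p = 2 is mean-square exponential stability.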

13.
We prove a lower bound of d^{1−o(1)} on the query time for any deterministic algorithm that solves approximate nearest neighbor searching in Yao's cell probe model. Our result greatly improves the best previous lower bound for this problem, obtained in [A. Chakrabarti et al., in: Proc. 31st Ann. ACM Symp. Theory of Computing, 1999, pp. 305-311]. Our proof is also much simpler than that of A. Chakrabarti et al.

14.
Phononic crystals (PnC) with a specifically designed liquid-filled defect have recently been introduced as a novel sensor platform (Lucklum et al. in Sens Actuators B Chem 171–172:271–277, 2012). Sensors based on this principle feature a band gap covering the typical input span of the measurand, as well as a narrow transmission peak within the band gap whose frequency of maximum transmission is governed by the measurand. This approach has been applied to the determination of volumetric properties of liquids (Lucklum et al. in Sens Actuators B Chem 171–172:271–277, 2012; Oseev et al. in Sens Actuators B Chem 189:208–212, 2013; Lucklum and Li in Meas Sci Technol 20(12):124014, 2009) and has demonstrated attractive sensitivity. One way to improve sensitivity requires higher probing frequencies, in the range of 100 MHz and above. In this range, surface acoustic wave (SAW) devices are an established basis for sensors. We have performed first tests towards a PnC microsensor (Lucklum et al. in Towards a SAW based phononic crystal sensor platform. In: 2013 Joint European frequency and time forum and international frequency control symposium (EFTF/IFC), pp 69–72, 2013). The respective feature size of the PnC SAW sensor is in the range of 10 µm and below. Whereas those dimensions are state of the art for common MEMS materials, etching holes and cavities with the required diameter-to-depth aspect ratio in piezoelectric materials is still challenging. In this contribution we describe an improved technological process able to realize considerably deep and uniform holes in a SAW substrate.

15.
Subexponential algorithms for partial cover problems
Partial Cover problems are optimization versions of fundamental and well-studied problems like Vertex Cover and Dominating Set. Here one is interested in covering (or dominating) the maximum number of edges (or vertices) using a given number k of vertices, rather than covering all edges (or vertices). In general graphs, these problems are hard for parameterized complexity classes when parameterized by k. It was recently shown by Amini et al. (2008) [1] that Partial Vertex Cover and Partial Dominating Set are fixed-parameter tractable on large classes of sparse graphs, namely H-minor-free graphs, which include planar graphs and graphs of bounded genus. In particular, it was shown that on planar graphs both problems can be solved in time 2^{O(k)} n^{O(1)}. During the last decade there has been an extensive study of parameterized subexponential algorithms. In particular, it was shown that the classical Vertex Cover and Dominating Set problems can be solved in subexponential time on H-minor-free graphs. The techniques developed to obtain subexponential algorithms for classical problems do not apply to partial cover problems. It was left as an open problem by Amini et al. (2008) [1] whether there is a subexponential algorithm for Partial Vertex Cover and Partial Dominating Set. In this paper, we answer the question affirmatively by solving both problems in subexponential time, not only on planar graphs but also on much larger classes of graphs, namely, apex-minor-free graphs. Compared to previously known algorithms for these problems, our algorithms are significantly faster and simpler.

16.
17.
18.
Consider a random graph model where each possible edge e is present independently with some probability p_e. Given these probabilities, we want to build a large/heavy matching in the randomly generated graph. However, the only way we can find out whether an edge is present or not is to query it, and if the edge is indeed present in the graph, we are forced to add it to our matching. Further, each vertex i is allowed to be queried at most t_i times. How should we adaptively query the edges to maximize the expected weight of the matching? We consider several matching problems in this general framework (some of which arise in kidney exchanges and online dating, and others arise in modeling online advertisements); we give LP-rounding based constant-factor approximation algorithms for these problems. Our main results are the following (a small simulation sketch of this probing model follows the list):
  • We give a 4-approximation for weighted stochastic matching on general graphs, and a 3-approximation on bipartite graphs. This answers an open question from Chen et al. (ICALP’09, LNCS, vol. 5555, pp. 266–278, [2009]).
  • We introduce a generalization of the stochastic online matching problem (Feldman et al. in FOCS’09, pp. 117–126, [2009]) that also models preference uncertainty and timeouts of buyers, and give a constant-factor approximation algorithm.
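Below is a minimal simulation of the probing model described in this entry, with per-vertex patience budgets. It uses a greedy probing order purely as a baseline for illustration; it is not the LP-rounding algorithm of the paper, and the expected-weight ordering heuristic and the function names are assumptions.

```python
import random


def greedy_probe_matching(edges, prob, weight, patience, rng=random):
    """Simulate the probing model: querying an edge consumes one unit of patience at
    both endpoints, and an edge found to be present must be added to the matching."""
    patience = dict(patience)                                  # keep the caller's budgets intact
    order = sorted(edges, key=lambda e: -weight[e] * prob[e])  # greedy heuristic order
    matched, matching, total = set(), [], 0.0
    for (u, v) in order:
        if u in matched or v in matched:
            continue
        if patience[u] <= 0 or patience[v] <= 0:
            continue
        patience[u] -= 1
        patience[v] -= 1
        if rng.random() < prob[(u, v)]:            # the edge turns out to be present
            matching.append((u, v))                # commit: we are forced to take it
            matched.update((u, v))
            total += weight[(u, v)]
    return total, matching


# Example: a triangle with uniform probabilities, unit weights, and unit patience.
edges = [(0, 1), (1, 2), (0, 2)]
p = {e: 0.5 for e in edges}
w = {e: 1.0 for e in edges}
t = {0: 1, 1: 1, 2: 1}
print(greedy_probe_matching(edges, p, w, t))
```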

19.
20.
Recommender systems are essential in mobile commerce, benefiting both companies and individuals by offering highly personalized products and services. One key prerequisite for applying such systems is gaining adequate knowledge about each individual consumer through user profiling. However, most existing profiling approaches on mobile devices suffer from problems such as non-real-time operation, intrusiveness, cold start, and poor scalability, which prevents them from being adopted in practice. To tackle these problems, this work developed real-time machine-learning models that predict the profiles of smartphone users from openly accessible data, i.e. app installation logs. Results from a study with 904 participants showed that the models are able to predict interests on average 48.81% better than a random guess in terms of precision and 13.80% better in terms of recall. Since the effectiveness of such predictive models in practice was unknown, they were also evaluated in a large-scale field experiment with 73,244 participants. Results showed that by leveraging our models, personalized mobile recommendations can be enabled and the corresponding click-through rate can be improved by up to 228.30%. Supplementary information, study data, and software can be found at https://www.autoidlabs.ch/mobile-analytics.
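As a toy illustration of profiling from app-installation logs (this is not the authors' model; the app identifiers, interest labels, and the bag-of-apps plus one-vs-rest logistic regression setup are all assumptions), a scikit-learn sketch might look like this:

```python
from sklearn.linear_model import LogisticRegression
from sklearn.multiclass import OneVsRestClassifier
from sklearn.preprocessing import MultiLabelBinarizer

# Hypothetical training data: per-user installed apps and known interest labels.
app_logs = [
    {"com.spotify.music", "com.strava", "com.nike.ntc"},
    {"com.booking", "com.tripadvisor", "com.airbnb"},
    {"com.spotify.music", "com.booking"},
]
interests = [
    {"sports", "music"},
    {"travel"},
    {"music", "travel"},
]

app_enc = MultiLabelBinarizer()       # bag-of-apps feature encoding
label_enc = MultiLabelBinarizer()     # multi-label interest targets
X = app_enc.fit_transform(app_logs)
Y = label_enc.fit_transform(interests)

clf = OneVsRestClassifier(LogisticRegression(max_iter=1000))
clf.fit(X, Y)

new_user = app_enc.transform([{"com.strava", "com.spotify.music"}])
print(label_enc.inverse_transform(clf.predict(new_user)))
```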
