In this article we study the tabu search (TS) method in an application for solving an important class of scheduling problems. Tabu search integrates artificial intelligence and optimization principles, with particular emphasis on exploiting flexible memory structures, to yield a highly effective solution procedure. We first discuss the problem of minimizing the sum of setup costs and linear delay penalties when N jobs, arriving at time zero, are to be scheduled for sequential processing on a continuously available machine. A prototype TS method is developed for this problem using the common approach of exchanging the positions of two jobs to transform one schedule into another. A more powerful method is then developed that employs insert moves in combination with swap moves to search the solution space. This method, together with the best parameters found during preliminary experimentation with the prototype procedure, is used to obtain solutions to a more complex problem that considers setup times in addition to setup costs. In this case, our procedure succeeded in finding optimal solutions to all problems for which such solutions are known, and a better solution to a larger problem for which optimizing procedures exceeded a specified time limit (branch and bound) or reached a memory overflow (branch and bound/dynamic programming) before normal termination. These experiments confirm not only the effectiveness but also the robustness of the TS method, in terms of the solution quality obtained with a common set of parameter choices for two related but different problems.
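The swap and insert neighborhoods described in the abstract can be sketched as a generic tabu search loop. This is an illustrative reconstruction, not the authors' implementation: the tabu tenure, iteration count, and the `cost` callback (which would encode the setup costs and linear delay penalties) are assumptions chosen for demonstration.

```python
def swap_neighbors(seq):
    """Generate (move, schedule) pairs by exchanging the positions of two jobs."""
    n = len(seq)
    for i in range(n):
        for j in range(i + 1, n):
            s = list(seq)
            s[i], s[j] = s[j], s[i]
            yield (seq[i], seq[j]), s

def insert_neighbors(seq):
    """Generate (move, schedule) pairs by removing a job and reinserting it elsewhere."""
    n = len(seq)
    for i in range(n):
        for j in range(n):
            if i == j:
                continue
            s = list(seq)
            job = s.pop(i)
            s.insert(j, job)
            yield (job, j), s

def tabu_search(jobs, cost, iterations=200, tenure=7):
    """Basic tabu search over combined swap and insert neighborhoods.

    `cost` maps a schedule (list of jobs) to its objective value; the tabu
    list forbids recently used moves unless they improve the best solution
    found so far (aspiration criterion)."""
    current = list(jobs)
    best, best_cost = current, cost(current)
    tabu = {}  # move -> iteration until which the move is forbidden
    for it in range(iterations):
        candidates = list(swap_neighbors(current)) + list(insert_neighbors(current))
        best_move, best_neighbor, best_nc = None, None, float("inf")
        for move, s in candidates:
            c = cost(s)
            # aspiration: a tabu move is allowed only if it beats the best known cost
            if tabu.get(move, -1) >= it and c >= best_cost:
                continue
            if c < best_nc:
                best_move, best_neighbor, best_nc = move, s, c
        if best_neighbor is None:
            break
        current = best_neighbor
        tabu[best_move] = it + tenure
        if best_nc < best_cost:
            best, best_cost = best_neighbor, best_nc
    return best, best_cost
```

A typical `cost` for this sketch would accumulate completion times and sum the delay penalties of late jobs; the real objective in the paper additionally includes sequence-dependent setup costs.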
Very low bit-rate video coding has recently become one of the most important areas of image communication and a large variety of applications have already been identified. Since conventional approaches are reaching a saturation point in terms of coding efficiency, a new generation of video coding techniques, aiming at a deeper “understanding” of the image, is being studied. In this context, image analysis, particularly the identification of objects or regions in images (segmentation), is a very important step. This paper describes a segmentation algorithm based on split and merge. Images are first simplified using mathematical morphology operators, which eliminate perceptually less relevant details. The simplified image is then split according to a quadtree structure and the resulting regions are finally merged in three steps: merging of similar regions, elimination of small regions, and control of the number of regions.
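The quadtree split step and a merge step can be sketched as follows. This is a simplified illustration, not the paper's algorithm: the homogeneity criterion (intensity range below a threshold) is an assumption, the morphological simplification stage is omitted, and the merge here groups regions by mean intensity alone, whereas the paper merges only spatially adjacent regions and adds two further control steps.

```python
import numpy as np

def quadtree_split(img, threshold):
    """Recursively split the image into blocks whose intensity range
    does not exceed `threshold`; returns (x, y, w, h) leaf blocks."""
    regions = []
    def split(x, y, w, h):
        block = img[y:y + h, x:x + w]
        if (int(block.max()) - int(block.min()) <= threshold) or w <= 1 or h <= 1:
            regions.append((x, y, w, h))
            return
        hw, hh = w // 2, h // 2
        split(x, y, hw, hh)                      # top-left
        split(x + hw, y, w - hw, hh)             # top-right
        split(x, y + hh, hw, h - hh)             # bottom-left
        split(x + hw, y + hh, w - hw, h - hh)    # bottom-right
    split(0, 0, img.shape[1], img.shape[0])
    return regions

def merge_by_mean(img, regions, threshold):
    """Simplified merge: group leaf blocks whose mean intensities are within
    `threshold` of a group representative (adjacency is ignored here)."""
    groups = []  # each entry: [representative mean, list of blocks]
    for (x, y, w, h) in regions:
        m = float(img[y:y + h, x:x + w].mean())
        for g in groups:
            if abs(g[0] - m) <= threshold:
                g[1].append((x, y, w, h))
                break
        else:
            groups.append([m, [(x, y, w, h)]])
    return [g[1] for g in groups]
```

On a toy two-intensity image the split produces one homogeneous block per quadrant and the merge recovers the two underlying regions.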
Parallel programming skills may take a long time to acquire: thinking in parallel requires time, effort, and experience. In this work, we propose to facilitate students' learning of parallel programming by using instant messaging. Our aim was to find out whether student interaction through instant messaging tools is beneficial for the learning process. To do so, we asked several students of an HPC course of the Master's degree in Computer Science at the University of León to develop a specific parallel application, each of them using a different application programming interface: OpenMP, MPI, CUDA, or OpenCL. Even though these APIs are different, there are common points in the design process. We encouraged students to interact with each other using Gitter, an instant messaging tool for GitHub users. Our analysis of the communications and results demonstrates that the direct interaction of students through the Gitter tool has a positive impact on the learning process.
Applied Intelligence - The 17 Sustainable Development Goals (SDGs) established by the United Nations Agenda 2030 constitute a global blueprint and instrument for peace and prosperity...
Neural Computing and Applications - Preserving red-chili quality is of utmost importance, and authorities demand quality-control techniques to detect and classify impurities and to protect the product from them. For...
Neural Computing and Applications - Grape reception is a key process in wine production. The harvest days are extremely challenging in managing the reception of the grapes, as the winery...
The inversion of schema mappings has been identified as one of the fundamental operators for the development of a general framework for metadata management. During the last few years, three alternative notions of inversion for schema mappings have been proposed (Fagin-inverse (Fagin, TODS 32(4), 25:1–25:53, 2007), quasi-inverse (Fagin et al., TODS 33(2), 11:1–11:52, 2008), and maximum recovery (Arenas et al., TODS 34(4), 22:1–22:48, 2009)). However, these notions lack some fundamental properties that limit their practical applicability: most of them are expressed in languages including features that are difficult to use in practice, some of these inverses are not guaranteed to exist for mappings specified with source-to-target tuple-generating dependencies (st-tgds), and it has so far been futile to search for a meaningful mapping language that is closed under any of these notions of inverse. In this paper, we develop a framework for the inversion of schema mappings that fulfills all of the above requirements. It is based on the notion of ${\mathcal{C}}$ -maximum recovery, for a query language ${\mathcal{C}}$, a notion designed to generate inverse mappings that recover only the information that can be retrieved with queries in ${\mathcal{C}}$. By focusing on the language of conjunctive queries (CQ), we are able to find a mapping language that contains the class of st-tgds, is closed under CQ-maximum recovery, and for which the chase procedure can be used to exchange data efficiently. Furthermore, we show that our choices of inverse notion and mapping language are optimal, in the sense that choosing a more expressive inverse operator or mapping language causes the loss of these properties.
Steganographic techniques allow users to covertly transmit information, hiding the existence of the communication itself. They can be used in several scenarios, ranging from evading censorship to discreetly extracting sensitive information from an organization. In this paper, we consider the problem of using steganography through a widely used network protocol (i.e. HTTP). We analyze the steganographic possibilities of HTTP, and propose an active warden model to hinder the usage of covert communication channels. Our framework is meant to be useful in many scenarios. It could be employed to ensure that malicious insiders are not able to use steganography to leak information outside an organization. Furthermore, our model could be used by web server administrators to ensure that their services are not being abused, for example, as anonymous steganographic mailboxes. Our experiments show that steganographic contents can be successfully eliminated, but that dealing with high-payload carriers such as large images may introduce notable delays in the communication process.
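One way an active warden can hinder HTTP covert channels is by normalizing the protocol fields an attacker might modulate. The sketch below is a simplified illustration of that idea, not the paper's model: the header whitelist `ALLOWED` and the `sanitize_request` function are hypothetical, and a real warden would also rewrite bodies and images, which is where the paper reports the notable delays.

```python
# Hypothetical whitelist of permitted request headers (illustrative only).
ALLOWED = {"Host", "User-Agent", "Accept", "Content-Type", "Content-Length"}

def sanitize_request(headers):
    """Active-warden style normalization of HTTP request headers.

    Covert channels based on header order, unusual capitalization, or
    unknown header fields are destroyed by canonicalizing the case,
    dropping non-whitelisted fields, and emitting a fixed ordering."""
    clean = {}
    for name, value in headers:
        # canonicalize e.g. "user-AGENT" -> "User-Agent"
        canon = "-".join(part.capitalize() for part in name.split("-"))
        if canon in ALLOWED:
            clean[canon] = value.strip()  # strip padding-based channels
    return sorted(clean.items())  # fixed ordering removes order-based channels
```

Any bits encoded in casing, ordering, padding, or a smuggled `X-...` header are lost after this rewrite, at the cost of also discarding legitimate nonstandard headers.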
We focus on two aspects of face recognition: feature extraction and classification. We propose a two-component system, introducing Lattice Independent Component Analysis (LICA) for feature extraction and Extreme Learning Machines (ELM) for classification. In previous works we have proposed LICA for a variety of image processing tasks. The first step of LICA is to identify strong lattice independent components from the data. In the second step, the set of strong lattice independent vectors is used for linear unmixing of the data, obtaining a vector of abundance coefficients. The resulting abundance values are used as features for classification, specifically for face recognition. Extreme Learning Machines are accurate, fast-learning classification methods based on the random generation of the input-to-hidden-unit weights followed by the solution of a linear system to obtain the hidden-to-output weights. The LICA-ELM system has been tested against state-of-the-art feature extraction methods and classifiers, outperforming them when performing cross-validation on four large unbalanced face databases.
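The ELM training scheme described above, random input-to-hidden weights followed by a least-squares solve for the output weights, can be sketched in a few lines. This is a generic illustration rather than the authors' implementation: the `tanh` activation, hidden-layer size, and toy data are assumptions; in the proposed system the inputs would be LICA abundance coefficients.

```python
import numpy as np

def elm_train(X, Y, n_hidden, rng):
    """ELM training: draw random input-to-hidden weights, then solve a
    linear least-squares problem for the hidden-to-output weights."""
    W = rng.standard_normal((X.shape[1], n_hidden))  # random, never trained
    b = rng.standard_normal(n_hidden)
    H = np.tanh(X @ W + b)                           # hidden-layer activations
    beta = np.linalg.pinv(H) @ Y                     # minimum-norm least squares
    return W, b, beta

def elm_predict(X, W, b, beta):
    """Propagate inputs through the fixed random layer and the learned weights."""
    return np.tanh(X @ W + b) @ beta
```

Because only `beta` is learned, and in closed form, training reduces to a single pseudoinverse, which is what makes ELMs fast relative to iteratively trained networks.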