Similar Documents
 20 similar documents found (search time: 0 ms)
1.
In its current state, the Facebook literature says very little about metrics developed specifically for this important social networking service. This study tries to fill part of this gap by conceptualizing, developing and validating a new Facebook use construct, “Gravitating towards Facebook” (GoToFB). Unlike traditional views of Facebook usage, which have focused on lean usage measures, the one presented in the current study offers a broader conceptualization that takes into account the interaction among three elements: the user, Facebook features, and the task. The investigation put extensive effort into validating the new Facebook instrument using a series of exploratory and confirmatory scale development techniques and found that it exhibits adequate measurement properties. The resulting scale consists of eight dimensions: connecting, sharing, relaxing, organizing, branding, monitoring, expressing, and learning. The instrument will be useful for businesses seeking a deep understanding of how potential customers use Facebook and for researchers seeking to develop and test Facebook success models.

2.
《Computer》2006,39(7):17-19
Radio frequency identification technology is surging in popularity and finding use in a growing number of applications. Despite RFID's growing profile, some researchers worry that potential security problems could cloud the technology's future, particularly as it is used for more critical purposes. There are concerns that hackers could tamper with or steal RFID data, such as product prices or patients' or credit-card holders' private information. RFID is subject to the same complexity-related security problems that have affected the entire IT industry. Sound software-development techniques make RFID more secure.

3.
Smith  M.R. 《Micro, IEEE》1992,12(6):10-23
The characteristics of benchmark digital signal processing (DSP) algorithms are examined. These characteristics are used to suggest the features of an ideal DSP architecture, which is compared to current DSP and reduced instruction set computer (RISC) architectures. Timing comparisons taken from data books and research show that several on-the-market RISCs have DSP performance close to or better than some DSP chips. Analysis of these DSP and RISC architectures leads to the suggestion of an ideal low-cost RISC DSP chip.
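The kernels that dominate benchmark DSP algorithms are multiply-accumulate loops, of which a direct-form FIR filter is the canonical example; a minimal Python sketch (illustrative, not taken from the article):

```python
def fir_filter(x, h):
    """Direct-form FIR filter: y[n] = sum_k h[k] * x[n - k].
    The inner multiply-accumulate (MAC) loop is the kernel that
    DSP chips accelerate with single-cycle MAC instructions."""
    y = []
    for n in range(len(x)):
        acc = 0.0
        for k, coeff in enumerate(h):
            if n - k >= 0:
                acc += coeff * x[n - k]
        y.append(acc)
    return y

# Filtering an impulse with a 3-tap filter reproduces the taps.
print(fir_filter([1.0, 0.0, 0.0, 0.0], [0.25, 0.5, 0.25]))
# -> [0.25, 0.5, 0.25, 0.0]
```

Whether a RISC matches a DSP chip on such a kernel comes down to how cheaply it can issue the multiply and the accumulate of the inner loop each cycle.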

4.
5.
6.
We present results on the expressive power of various deductive database languages extended with stratified aggregation. We show that (1) Datalog extended with stratified aggregation cannot express a query to count the number of paths between every pair of nodes in an acyclic graph, (2) Datalog extended with stratified aggregation and arithmetic on integers (the + operator) can express all computable queries on ordered domains, and (3) Datalog extended with stratified aggregation and generic function symbols can express all computable queries (on ordered or unordered domains). Note that without stratified aggregation, the above extensions of Datalog cannot express all computable queries. We show that replacing stratified aggregation by stratified negation preserves expressiveness. We identify subclasses of the above languages that are complete (can express all, and only the, computable queries).
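The path-counting query of result (1) is easy to state procedurally; a minimal Python sketch of what it computes, on a small hypothetical acyclic graph (illustrative only, not a Datalog program):

```python
def count_paths(edges, u, v):
    """Number of distinct directed paths from u to v; this is the
    aggregate query of result (1). The recursion terminates because
    the graph is acyclic."""
    if u == v:
        return 1
    return sum(count_paths(edges, w, v) for (s, w) in edges if s == u)

# Hypothetical diamond-shaped DAG: a -> b -> d and a -> c -> d.
edges = {("a", "b"), ("a", "c"), ("b", "d"), ("c", "d")}
print(count_paths(edges, "a", "d"))  # -> 2
```

The difficulty for Datalog with stratified aggregation is not computing any single count but expressing this count uniformly for every pair of nodes.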

7.
This paper describes some results of what, to the authors' knowledge, is the largest N-version programming experiment ever performed. The object of this ongoing four-year study is to determine just how consistent the results of scientific computation really are and, from this, to estimate accuracy. The experiment is being carried out in a branch of the earth sciences known as seismic data processing, in which some 15 independently developed large commercial packages implementing mathematical algorithms from the same or similar published specifications, in the same programming language (Fortran), have been developed over the last 20 years. The results of processing the same input dataset, using the same user-specified parameters, are reported here for nine of these packages. Finally, obvious flaws were fed back to the developers in an attempt to reduce the overall disagreement. The results are deeply disturbing. Whereas scientists like to think that their code is accurate to the precision of the arithmetic used, in this study numerical disagreement grows at around the rate of 1% in average absolute difference per 4000 lines of implemented code, and, even worse, the nature of the disagreement is nonrandom. Furthermore, the seismic data processing industry has better-than-average quality standards for its software development, with both identifiable quality assurance functions and substantial test datasets.
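The reported trend lends itself to a rough back-of-the-envelope extrapolation; a sketch (the linear form and the 20,000-line package size are illustrative assumptions, not figures from the paper):

```python
def expected_disagreement(lines_of_code, rate=0.01, per_lines=4000):
    """Linear extrapolation of the reported trend: average absolute
    disagreement grows at roughly 1% per 4000 lines of implemented
    code. Illustrative only; the paper reports a trend, not a law."""
    return rate * (lines_of_code / per_lines)

# A hypothetical 20,000-line package would be expected to show
# roughly 5% average absolute disagreement under this extrapolation.
print(f"{expected_disagreement(20_000):.0%}")
```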

8.
How trustworthy is trusted computing?   (Total citations: 3; self-citations: 0; citations by others: 3)
《Computer》2003,36(3):18-20
One of the biggest issues facing computer technology today is data security. The problem has gotten worse because users are working with sensitive information more often, while the number of threats is growing and hackers are developing new types of attacks. Because of this, many technology experts advocate development of trusted computing (TC) systems that integrate data security into their core operations, rather than implementing it via add-on applications. The paper discusses cryptographic trusted computing and trusted computing initiatives.

9.
10.
This paper proposes an investigation of the global statistics of synthetic protein networks—a step towards a systemic understanding of their design space. We derive a liquidity index which describes the onset of the phase transition where an ensemble of agents aggregates into a giant cluster. This index captures the influence of both the domain distribution of agents and the binding strengths of their various domains in the limit of infinite populations. In simple cases it is possible to derive an explicit analytical expression of this index, which allows one to compare with simulations, and get a sense of how it transfers to the concrete finite case.
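The giant-cluster onset that the liquidity index describes is analogous to the classic random-graph phase transition; the following toy simulation (a hypothetical model with uniform random pairwise bindings tracked by union-find, not the paper's domain-based model) shows the qualitative behaviour:

```python
import random
from collections import Counter

def largest_cluster_fraction(n, mean_degree, seed=0):
    """Fraction of n agents in the largest cluster after forming
    n * mean_degree / 2 uniform random pairwise bindings, tracked
    with a union-find structure."""
    rng = random.Random(seed)
    parent = list(range(n))

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x

    for _ in range(int(n * mean_degree / 2)):
        parent[find(rng.randrange(n))] = find(rng.randrange(n))

    sizes = Counter(find(i) for i in range(n))
    return max(sizes.values()) / n

# Below the critical mean degree (~1) the largest cluster stays tiny;
# above it, a giant cluster absorbing a finite fraction appears.
print(largest_cluster_fraction(10_000, 0.5), largest_cluster_fraction(10_000, 2.0))
```

The paper's index refines this picture by weighting bindings according to agents' domain distributions and binding strengths rather than treating them as uniform.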

11.
It is shown in this note that for SISO systems under l2 disturbances, when data commute approximately with the shift, the optimal interpolation (over all linear time-varying interpolants) can be approximated by the supremum of the frozen-time Hankel norms. This confirms the intuition that the frozen-time constructed optimal or suboptimal interpolants are in fact nearly optimal when data vary slowly.

12.
In this article, I examine to what extent Computers and Composition: An International Journal for Teachers of Writing is international. My analysis of several aspects of the journal indicates limited international scope. I also discuss two issues important when considering the potential international scope of computers and writing research and practices: the differing uses of computers for writing by different language users and the differing concepts of identity and self in different cultures in relation to writing. I conclude with concrete suggestions for broadening our perspectives on computers and writing and making this journal truly international.

13.
Synthetic emotion   (Total citations: 6; self-citations: 0; citations by others: 6)
New research in affective computing (computing that relates to, arises from, or deliberately influences emotion) aims to give computers skills of emotional intelligence. These skills are crucial for learning and for savvy social interaction. For example, recognizing someone's emotional response is key to sensing whether what you have just done or said met with approval or disapproval, interest or boredom, confusion or understanding. Computers are increasingly verbal and have formidable computational abilities, but they still cannot see whether the graphics they are displaying delight or bore the viewer. The author considers how computers might get better at recognizing emotion.

14.
One of the most important aspects of a Web document is its up-to-dateness, or recency. Up-to-dateness is particularly relevant to Web documents because they usually contain content originating from different sources and refreshed at different dates. Whether a Web document is relevant for a reader depends on the history of its contents and on so-called external factors, i.e., the up-to-dateness of semantically related documents. In this paper, we approach automatic management of the up-to-dateness of Web documents managed by an XML-centric Web content management system. First, the freshness of a single document is computed, taking its change history into account. A document metric estimates the distance between different versions of a document. Second, the up-to-dateness of a document is determined based on its own history and the historical evolution of semantically related documents.
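The two-step computation can be sketched as follows; the exponential-decay form, the half-life parameter, and the 70/30 weighting are all illustrative assumptions, not the paper's formulas:

```python
from datetime import date

def freshness(last_change, today, half_life_days=30.0):
    """Exponential-decay freshness: 1.0 immediately after a change,
    halving every `half_life_days`. A toy stand-in for the paper's
    change-history-based computation."""
    age_days = (today - last_change).days
    return 0.5 ** (age_days / half_life_days)

def up_to_dateness(own_freshness, related_freshness):
    """Blend a document's own freshness with that of its semantically
    related documents; the 70/30 weighting is illustrative."""
    if not related_freshness:
        return own_freshness
    avg_related = sum(related_freshness) / len(related_freshness)
    return 0.7 * own_freshness + 0.3 * avg_related

f = freshness(date(2024, 1, 1), date(2024, 1, 31))  # 30 days old -> 0.5
print(f, up_to_dateness(f, [1.0, 0.6]))
```

In the paper the first step additionally weights each revision by the distance the document metric assigns between versions, which this sketch omits.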

15.

Previous studies have indicated that when interfaces are designed consistently with regard to structure and physical attributes, higher performance and lower error rates are achieved than when interfaces are designed inconsistently. The objective of the current study was to develop a methodology for measuring all aspects of computer interface consistency and to assess the impact of linguistic inconsistency of interface design on user performance. Based on the background literature, seven factors were identified as affecting overall consistency. From this, a structured questionnaire of 125 items was developed; a factor analysis reduced the questionnaire to 94 items and identified nine factors contributing to consistency: text structure, general text features, information representation, lexical categories, meaning, user knowledge, text content, communicational attributes and physical attributes. A series of four experiments was conducted with 140 subjects using four different tasks and eight different interface types. The internal reliability of the questionnaire was 0.81, and the inter-rater reliability was 0.75. The instrument effectively identified all of the inconsistencies in the interface designs. It can be used both as an evaluation tool and as a design tool for Web-based interfaces.
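Internal reliability figures like the 0.81 reported here are conventionally computed as Cronbach's alpha (an assumption; the abstract does not name the statistic); a self-contained sketch with hypothetical data:

```python
def cronbach_alpha(item_scores):
    """Cronbach's alpha: alpha = k/(k-1) * (1 - sum of item variances
    / variance of respondents' total scores). `item_scores` is a list
    of per-item score lists, one entry per respondent."""
    def var(xs):
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / len(xs)

    k = len(item_scores)
    totals = [sum(respondent) for respondent in zip(*item_scores)]
    return k / (k - 1) * (1 - sum(var(i) for i in item_scores) / var(totals))

# Three hypothetical questionnaire items answered by four respondents.
items = [[4, 3, 5, 2], [4, 2, 5, 3], [5, 3, 4, 2]]
print(round(cronbach_alpha(items), 3))  # -> 0.892
```

Alpha rises when items covary (measure the same construct), which is why it serves as an internal-consistency check on a questionnaire like the one described.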

16.
17.
In the Hilbert space operator formalism of quantum mechanics, a single quantum state, which is represented by a density operator, can be regarded as classical in the sense that it can always be diagonalized. However, a quantum ensemble, which is represented by a family of quantum states together with a probability distribution specifying the probability of the occurrence of each state, cannot be diagonalized simultaneously in generic cases, and possesses intrinsic quantum features as long as the involved quantum states are not commutative. The natural question arises as to how to quantify its quantumness. By virtue of a canonical correspondence between quantum ensembles and classical-quantum bipartite states, we propose an intuitive entropic quantity which captures certain quantum features of quantum ensembles, and compare it with that defined as the gap between the Holevo quantity and the accessible information. Implications for quantum cryptography and relations to quantum channel capacities are indicated. Some illustrative examples are worked out.
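The Holevo quantity mentioned above is chi = S(sum_i p_i rho_i) - sum_i p_i S(rho_i), where S is the von Neumann entropy; a pure-Python sketch for qubit ensembles (the example ensemble of |0> and |+> is illustrative, not from the paper):

```python
import math

def eig2(m):
    """Eigenvalues of a 2x2 Hermitian matrix [[a, b], [conj(b), d]]."""
    a, b, d = m[0][0].real, m[0][1], m[1][1].real
    tr, det = a + d, a * d - (b * b.conjugate()).real
    disc = math.sqrt(max(tr * tr - 4 * det, 0.0))
    return [(tr + disc) / 2, (tr - disc) / 2]

def entropy(m):
    """Von Neumann entropy (in bits) of a 2x2 density matrix."""
    return -sum(l * math.log2(l) for l in eig2(m) if l > 1e-12)

def holevo(probs, states):
    """Holevo quantity chi = S(average state) - average of S(rho_i)."""
    avg = [[sum(p * s[r][c] for p, s in zip(probs, states))
            for c in range(2)] for r in range(2)]
    return entropy(avg) - sum(p * entropy(s) for p, s in zip(probs, states))

# Equal-probability ensemble of the pure states |0> and |+>.
rho0 = [[1.0, 0.0], [0.0, 0.0]]
rho_plus = [[0.5, 0.5], [0.5, 0.5]]
print(holevo([0.5, 0.5], [rho0, rho_plus]))  # ~0.60 bits
```

Because both members are pure (zero entropy), chi here is just the entropy of the average state; the noncommutativity of the two states is what makes it strictly positive.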

18.

19.
20.

The introduction of standards will hopefully ensure that users can access particular computer resources through a communications network for their own purposes without major problems. The International Standards Organisation (ISO) has developed a seven-layer reference model which is to be used for the purpose of incorporating standards relating to the interconnection of open systems (OSI). It is important that the human factors requirements are considered in relation to this model if the aim of generality of use is to be achieved. This paper considers some of the major human factors requirements and describes an approach to translating them into design standards which can be implemented. The approach starts from a consideration of user activity and develops into a language interface which could reside in layers of the reference model.
