20 matching records found (search time: 31 ms)
1.
This paper explores issues surrounding current approaches to the design of technological products and offers
two critical design proposals for presentation and debate. Driven primarily by contemporary theoretical writing
on ‘technology’ and ‘simulation’ by leading thinkers on these subjects, and expressed as ‘technological objects’,
the proposals are the result of a critical investigation into the emerging design issues surrounding ‘interaction’
and ‘transparency’. Using the ‘popular’ language of product design as a vehicle, they exist as ‘cultural offerings’ exploring
an alternative future for technological products, one not necessarily governed by science and economics.
2.
Atsuyoshi Nakamura Jun-ichi Takeuchi Naoki Abe 《Annals of Mathematics and Artificial Intelligence》1998,23(1-2):53-82
We consider a variant of the ‘population learning model’ proposed by Kearns and Seung [8], in which the learner is required
to be ‘distribution-free’ as well as computationally efficient. A population learner receives as input hypotheses from a large
population of agents and produces as output its final hypothesis. Each agent is assumed to independently obtain a labeled sample
for the target concept and output a hypothesis. A polynomial time population learner is said to PAC-learn a concept class,
if its hypothesis is probably approximately correct whenever the population size exceeds a certain bound which is polynomial,
even if the sample size for each agent is fixed at some constant. We exhibit some general population learning strategies,
and some simple concept classes that can be learned by them. These strategies include the ‘supremum hypothesis finder’, the
‘minimum superset finder’ (a special case of the ‘supremum hypothesis finder’), and various voting schemes. When coupled with
appropriate agent algorithms, these strategies can learn a variety of simple concept classes, such as the ‘high–low game’,
conjunctions, axis-parallel rectangles and others. We give upper bounds on the required population size for each of these
cases, and show that these systems can be used to obtain a speed up from the ordinary PAC-learning model [11], with appropriate
choices of sample and population sizes. With the population learner restricted to be a voting scheme, what we have is effectively
a model of ‘population prediction’, in which the learner is to predict the value of the target concept at an arbitrarily drawn
point, as a threshold function of the predictions made by its agents on the same point. We show that the population learning
model is strictly more powerful than the population prediction model. Finally, we consider a variant of this model with classification
noise, and exhibit a population learner for the class of conjunctions in this model.
This revised version was published online in June 2006 with corrections to the Cover Date.
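The voting-scheme strategy can be illustrated with a toy sketch (not the paper's formal model): each agent here learns a simple threshold concept from a constant-size sample, and the population learner predicts by majority vote. The midpoint learner, the target value, and all names are illustrative assumptions.

```python
import random

def agent_hypothesis(samples):
    """Toy agent learner: place the threshold midway between the largest
    negative example and the smallest positive example seen."""
    pos = [x for x, y in samples if y]
    neg = [x for x, y in samples if not y]
    return (max(neg, default=0.0) + min(pos, default=1.0)) / 2.0

def population_predict(x, thresholds):
    """Voting-scheme population learner: majority vote of the agents."""
    votes = sum(1 for t in thresholds if x >= t)
    return 2 * votes >= len(thresholds)

rng = random.Random(0)
TARGET = 0.37                       # true concept: label(x) = (x >= TARGET)

thresholds = []
for _ in range(200):                # large population ...
    draws = [rng.random() for _ in range(3)]    # ... constant sample size
    thresholds.append(agent_hypothesis([(x, x >= TARGET) for x in draws]))

print(population_predict(0.9, thresholds))      # True  (well above TARGET)
print(population_predict(0.1, thresholds))      # False (well below TARGET)
```

Even though each agent sees only three examples, the vote over a large population classifies points away from the threshold reliably, which is the speed-up intuition the abstract describes.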
3.
Experimental research with humans and animals suggests that sleep — particularly REM sleep — is, in some way, associated with
learning. However, the nature of the association and the underlying mechanism remain unclear. A number of theoretical models
have drawn inspiration from research into Artificial Neural Networks. Crick and Mitchison's ‘unlearning’ and Robins and McCallum's
‘pseudo-rehearsal’ models suggest alternative mechanisms through which sleep could contribute to learning. In this paper we
present simulations, suggesting a possible synthesis. Our simulations use a modified version of a Hopfield network to model
the possible contribution of sleep to memory consolidation. Sleep is simulated by removing all sensory input to the network
and by exposing it to ‘noise’, intended as a highly abstract model of the signals generated by the ponto-geniculo-occipital
system during sleep. The results show that simulated sleep does indeed contribute to learning and that the relationship between
the observed effect and the length of simulated sleep can be represented by a U-shaped curve. It is shown that while high-amplitude,
low-frequency noise (reminiscent of NREM sleep) leads to a general reinforcement of memory, low-amplitude, high-frequency
noise (as observed in REM sleep) leads to ‘forgetting’ of all but the strongest memory traces. This suggests that a combination
of the two kinds of sleep might produce a stronger effect than either kind of sleep on its own and that effective consolidation
of memory during sleep may depend not just on REM or NREM sleep but on the overall dynamics of the sleep cycle.
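A loose, illustrative sketch of this kind of setup (not the authors' exact model): the code below stores a strong and a weak trace in a small Hopfield network, then simulates ‘sleep’ by settling from pure noise with no sensory input and gently reinforcing whichever attractor is reached. The network size, noise amplitude, and rehearsal-style consolidation rule are all assumptions made for illustration.

```python
import random

rng = random.Random(1)
N = 16

def hebbian(patterns, n):
    """Hopfield weights from +/-1 patterns via the Hebb rule."""
    w = [[0.0] * n for _ in range(n)]
    for p in patterns:
        for i in range(n):
            for j in range(n):
                if i != j:
                    w[i][j] += p[i] * p[j] / n
    return w

def settle(w, state, steps=400):
    """Random asynchronous +/-1 updates of the network state."""
    s = list(state)
    for _ in range(steps):
        i = rng.randrange(len(s))
        s[i] = 1 if sum(w[i][j] * s[j] for j in range(len(s))) >= 0 else -1
    return s

strong = [rng.choice((-1, 1)) for _ in range(N)]
weak = [rng.choice((-1, 1)) for _ in range(N)]
w = hebbian([strong, strong, weak], N)   # the strong trace is stored twice

# 'Sleep': settle from noise (no sensory input) and gently reinforce
# whichever attractor the network falls into (rehearsal-style).
AMP = 0.02   # small amplitude so sleep cannot destabilise stored traces
for _ in range(30):
    dream = settle(w, [rng.choice((-1, 1)) for _ in range(N)])
    for i in range(N):
        for j in range(N):
            if i != j:
                w[i][j] += AMP * dream[i] * dream[j] / N

cue = list(strong)
cue[0], cue[1] = -cue[0], -cue[1]        # corrupt two bits of the cue
errors = sum(a != b for a, b in zip(settle(w, cue), strong))
print(errors)   # 0: the strong trace is still recalled after 'sleep'
```

Because the strong trace is stored twice, its basin of attraction dominates the noise-driven ‘dreams’, so consolidation reinforces it preferentially, a crude analogue of the effect the simulations report.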
4.
In this paper we reflect on a body of work to develop a simpler form of digital photography. We give three examples of ‘Less
is More’ thinking in this area which are directed and inspired by naturalistic user behaviours and reactions to design ideas.
Each example happens to review the place of an old technology in the new scheme of things, and challenges a technological
trend in the industry. Hence, we consider the role of sound in photography to recommend audiophotographs rather than short
video clips as a new media form. We look again at the role of paper in photo sharing and recommend its support and augmentation
against the trend towards screen-based viewing. Finally, we consider the role of physical souvenirs and memorabilia alongside
photographs, to recommend their use as story triggers and containers, in contrast to explicit multimedia presentations. The
implications for simple computing are discussed.
This paper originated from the International Forum ‘Less is More—Simple computing in an age of complexity’, 27–28 April 2005,
Cambridge, UK.
5.
V. P. Kharbanda 《AI & Society》2002,16(1-2):89-99
In the present scenario of globalisation, knowledge has become the prime factor of production for competitive advantage.
This calls for acquisition and utilisation of knowledge for innovation and technical change on a constant basis, which is
only possible in a ‘learning organisation’. Innovative activities of a learning organisation are influenced by three main
factors: (1) internal learning; (2) external learning; and (3) the innovation strategies decided upon by the enterprise management.
An assumption has been made that, particularly in developing countries, absorption and adaptation of technologies, i.e. indigenisation,
take place through a process of ‘learning by doing’. Taking this into consideration, this paper focuses on a few case studies
carried out at NISTADS, New Delhi, India, on small enterprises in the formal as well as traditional sectors, highlighting
the learning process in an organisational context and how it brings in innovation and technological change at enterprise level.
The study demonstrates that a learning environment in an organisational context is indispensable for being innovative
and for building up capabilities for technological change. This in turn also calls for strong networking of the enterprises with
academia, R&D institutions and other enterprises, to create knowledge clusters. This builds up a strong case for a network
approach of learning organisations not only at the regional level but also at the cross-cultural level for constant innovation
and technical change.
6.
In this paper, we demonstrate how craft practice in contemporary jewellery opens up conceptions of ‘digital jewellery’ to
possibilities beyond merely embedding pre-existing behaviours of digital systems in objects, which follow shallow interpretations
of jewellery. We argue that a design approach that understands jewellery only in terms of location on the body is likely to
lead to a world of ‘gadgets’, rather than anything that deserves the moniker ‘jewellery’. In contrast, by adopting a craft
approach, we demonstrate that the space of digital jewellery can include objects where the digital functionality is integrated
as one facet of an object that can be personally meaningful for the holder or wearer.
7.
Panasonic initiated a reform strategy called “Value Creation 21” in 2001. This strategy had a strong impact on its transaction
relationships. This research covers one of the important issues in analyzing how the transaction network in Panasonic has
changed during the period of “Value Creation 21.” In order to make Panasonic’s transaction relationships visible and countable,
we have introduced graph theory and centrality measures from the viewpoints of degree, closeness, and betweenness, by
using data collected in 2002 and 2005. Our findings are reported here. First, the number of firms in Panasonic’s transaction
network in 2005 was smaller than in 2002. Second, not only the degree, but also the closeness and betweenness, of the main
firms in the Panasonic Group and their suppliers decreased a little more in 2005. Third, the number of in-degree firms declined,
whereas the relative importance of Panasonic in the transaction network was more significant. Fourth, Panasonic’s affiliated
firms in components & devices and the digital AVC network domain ranked higher than other firms in the transaction network.
Last, its out-degree suppliers dropped more in 2005 than in 2002. With these findings, we finally concluded how Panasonic
arranged its transaction relationships during the turnaround.
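The centrality measures used in the study can be computed without any graph library; the sketch below implements out-degree and closeness centrality (betweenness is omitted for brevity) on a hypothetical toy supplier network, with firm names and edges invented for illustration.

```python
from collections import deque

def degree_centrality(adj):
    """Out-degree of each firm, normalised by the number of possible ties."""
    n = len(adj)
    return {v: len(nbrs) / (n - 1) for v, nbrs in adj.items()}

def closeness_centrality(adj):
    """Closeness: (reachable firms) / (sum of shortest-path distances),
    computed by breadth-first search from each firm."""
    scores = {}
    for src in adj:
        dist = {src: 0}
        q = deque([src])
        while q:
            u = q.popleft()
            for v in adj[u]:
                if v not in dist:
                    dist[v] = dist[u] + 1
                    q.append(v)
        total = sum(dist.values())
        scores[src] = (len(dist) - 1) / total if total else 0.0
    return scores

# Toy transaction network: an edge A -> B means 'A transacts with B'.
network = {
    "Panasonic": {"SupplierA", "SupplierB", "SupplierC"},
    "SupplierA": {"Panasonic"},
    "SupplierB": {"Panasonic", "SupplierA"},
    "SupplierC": set(),
}
print(degree_centrality(network)["Panasonic"])     # 1.0: ties to all others
print(closeness_centrality(network)["Panasonic"])  # 1.0: all firms one step away
```

On real transaction data (as in the study, collected for 2002 and 2005), comparing these scores across years is what reveals which firms gained or lost structural importance.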
8.
Frans-Willem Korsten 《AI & Society》2012,27(1):13-23
Apostrophe is best known as a punctuation mark (’) or as a key poetic figure (with a speaker addressing an imaginary or absent
person or entity). In origin, however, it is a pivotal rhetorical figure that indicates a ‘breaking away’ or turning away of the speaker from one addressee to another, in a different mode.
In this respect, apostrophe is essentially theatrical. To be sure, the turn away implies two different modes of address that
may follow upon one another, as is hinted at by the two meanings of the verb ‘to witness’: being a witness and bearing witness.
One cannot do both at the same time. My argument will be, however, that in order to make witnessing work ethically and responsibly,
the two modes of address must take place simultaneously, in the coincidence of two modalities of presence: one actual and
one virtual. Accordingly, I will distinguish between an address of attention and an address of expression. Whereas the witness is actually
paying attention to that which she witnesses, she is virtually (and in the sense Deleuze intended, no less really) turning away in terms of expression. The two come together in what Kelly Oliver called the ‘inner witness’. The simultaneous
operation of two modes of address suggests that Caroline Nevejan’s so-called YUTPA model would have to include two modalities
of ‘you’. Such a dual modality has become all the more important, in the context of the society of the spectacle. One text
will help me first to explore two modes of address through apostrophe. I will focus on a story by Dutch author Maria Dermoût,
written in the fifties of the twentieth century, reflecting on an uprising and the subsequent execution of its leader in the
Dutch East Indies in 1817. Secondly, I will move to American artist Kara Walker's response, in the shape of an installation and
a visual essay, to the flooding of New Orleans in 2005. The latter will serve to illustrate a historic shift in the theatrical
nature and status of ‘presence’ in the two modes of address. Instead of thinking of the convergence of media, of which Jenkins
speaks, we might think of media swallowing up one another. For instance, the theatrical structure of apostrophe is swallowed
up, and in a sense perverted, by the model of the spectacle in modern media. This endangers the very possibility of witnessing
in any ethical sense of the word.
9.
Manipulatives—physical learning materials such as cubes or tiles—are prevalent in educational settings across cultures and
have generated substantial research into how actions with physical objects may support children’s learning. The ability to
integrate digital technology into physical objects—so-called ‘digital manipulatives’—has generated excitement over the potential
to create new educational materials. However, without a clear understanding of how actions with physical materials lead to
learning, it is difficult to evaluate or inform designs in this area. This paper is intended to contribute to the development
of effective tangible technologies for children’s learning by summarising key debates about the representational advantages
of manipulatives under two key headings: offloading cognition—where manipulatives may help children by freeing up valuable cognitive resources during problem solving, and conceptual metaphors—where perceptual information or actions with objects have a structural correspondence with more symbolic concepts. The review
also indicates possible limitations of physical objects—most importantly that their symbolic significance is only granted
by the context in which they are used. These arguments are then discussed in light of tangible designs drawing upon the authors’
current research into tangibles and young children's understanding of number.
10.
Vishwanathan Mohan Pietro Morasso Jacopo Zenzeri Giorgio Metta V. Srinivasa Chakravarthy Giulio Sandini 《Autonomous Robots》2011,31(1):21-53
The core cognitive ability to perceive and synthesize ‘shapes’ underlies all our basic interactions with the world, be it
shaping one’s fingers to grasp a ball or shaping one’s body while imitating a dance. In this article, we describe our attempts
to understand this multifaceted problem by creating a primitive shape perception/synthesis system for the baby humanoid iCub.
We specifically deal with the scenario of iCub gradually learning to draw or scribble shapes of gradually increasing complexity,
after observing a demonstration by a teacher, by using a series of self-evaluations of its performance. Learning to imitate
a demonstrated human movement (specifically, visually observed end-effector trajectories of a teacher) can be considered as
a special case of the proposed computational machinery. This architecture is based on a loop of transformations that express
the embodiment of the mechanism but, at the same time, are characterized by scale invariance and motor equivalence. The following
transformations are integrated in the loop: (a) Characterizing in a compact, abstract way the ‘shape’ of a demonstrated trajectory
using a finite set of critical points, derived using catastrophe theory: Abstract Visual Program (AVP); (b) Transforming the
AVP into a Concrete Motor Goal (CMG) in iCub’s egocentric space; (c) Learning to synthesize a continuous virtual trajectory similar to the demonstrated shape using the discrete set of critical points defined in CMG; (d) Using the virtual trajectory as an attractor for iCub’s internal body model, implemented by the Passive Motion Paradigm which includes a forward and an
inverse motor model; (e) Forming an Abstract Motor Program (AMP) by deriving the ‘shape’ of the self-generated movement (forward
model output) using the same technique employed for creating the AVP; (f) Comparing the AVP and AMP in order to generate an
internal performance score and hence closing the learning loop. The resulting computational framework further combines three
crucial streams of learning: (1) motor babbling (self-exploration), (2) imitative action learning (social interaction) and
(3) mental simulation, to give rise to sensorimotor knowledge that is endowed with seamless compositionality, generalization
capability and body-effectors/task independence. The robustness of the computational architecture is demonstrated by means
of several experimental trials of gradually increasing complexity using a state-of-the-art humanoid platform.
11.
Daniel S. Yeung Defeng Wang Wing W. Y. Ng Eric C. C. Tsang Xizhao Wang 《Machine Learning》2007,68(2):171-200
This paper proposes a new large margin classifier—the structured large margin machine (SLMM)—that is sensitive to the structure
of the data distribution. The SLMM approach incorporates the merits of “structured” learning models, such as radial basis
function networks and Gaussian mixture models, with the advantages of “unstructured” large margin learning schemes, such as
support vector machines and maxi-min margin machines. We derive the SLMM model from the concepts of “structured degree” and
“homospace”, based on an analysis of existing structured and unstructured learning models. Then, by using Ward’s agglomerative
hierarchical clustering on input data (or data mappings in the kernel space) to extract the underlying data structure, we
formulate SLMM training as a sequential second order cone programming. Many promising features of the SLMM approach are illustrated,
including its accuracy, scalability, extensibility, and noise tolerance. We also demonstrate the theoretical importance of
the SLMM model by showing that it generalizes existing approaches, such as SVMs and M4s, provides novel insight into learning models, and lays a foundation for conceiving other “structured” classifiers.
Editor: Dale Schuurmans.
This work was supported by the Hong Kong Research Grant Council under Grants G-T891 and B-Q519.
12.
Maria Nazaré Munari Angeloni Hahne Fernando Mendes de Azevedo 《Neural computing & applications》2008,17(1):65-74
This paper presents a methodology that uses evolutionary learning in training ‘A’ model networks, a topology based on Interactive
Activation and Competition (IAC) neural networks. IAC networks show local knowledge and processing units clustered in pools.
The connections among units may assume only 1, 0 or −1. On the other hand, ‘A’ model network uses values in interval [−1,
1]. This feature provides a wider range of applications for this network, including problems which do not show mutually exclusive
concepts. However, there is no algorithm to adjust the network weights and still preserve the desired characteristics of the
original network. Accordingly, we propose the use of genetic algorithms in a new methodology to obtain the correct weight
set for this network. Two examples are used to illustrate the proposed method. Findings are considered consistent and generic
enough to allow further applications to similar classes of problems suitable for ‘A’ model IAC networks.
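A minimal genetic-algorithm sketch in the spirit described, not the authors' method: real-valued weights constrained to [−1, 1] are evolved with truncation selection, one-point crossover, and clamped Gaussian mutation. The target weight set and all hyper-parameters are invented for illustration.

```python
import random

rng = random.Random(42)
TARGET = [0.8, -0.3, 0.5, -1.0]     # hypothetical 'correct' weight set

def fitness(w):
    """Negative squared error against the target weights (higher is better)."""
    return -sum((a - b) ** 2 for a, b in zip(w, TARGET))

def crossover(a, b):
    """One-point crossover of two weight vectors."""
    cut = rng.randrange(1, len(a))
    return a[:cut] + b[cut:]

def mutate(w, rate=0.2, scale=0.1):
    """Gaussian mutation, clamped to the network's [-1, 1] interval."""
    return [min(1.0, max(-1.0, x + rng.gauss(0.0, scale)))
            if rng.random() < rate else x for x in w]

pop = [[rng.uniform(-1, 1) for _ in TARGET] for _ in range(40)]
init_best = max(pop, key=fitness)

for _ in range(200):
    pop.sort(key=fitness, reverse=True)
    elite = pop[:10]                 # truncation selection with elitism
    pop = elite + [mutate(crossover(rng.choice(elite), rng.choice(elite)))
                   for _ in range(30)]

best = max(pop, key=fitness)
print(fitness(best) >= fitness(init_best))   # True: elitism never regresses
print(all(-1.0 <= x <= 1.0 for x in best))   # True: weights stay in range
```

The clamp in `mutate` is what preserves the [−1, 1] constraint that distinguishes the ‘A’ model from standard IAC connections, which is the kind of property a hand-written gradient rule might violate.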
13.
Yonglong Wang Bill Samson David Ellison Louis Natanson 《Neural computing & applications》2001,10(3):253-263
Active learning balances the cost of data acquisition against its usefulness for training. We select only those data points
which are the most informative about the system being modelled. The MIQR (Maximum Inter-Quartile Range) criterion is defined
by computing the inter-quartile range of the outputs of an ensemble of networks, and finding the input parameter values for
which this is maximal. This method ensures data selection is not unduly influenced by ‘outliers’, but is principally dependent
upon the ‘mainstream’ state of the ensemble. MIQR is more effective and efficient than contending methods. The algorithm automatically regulates the training threshold and the network architecture as necessary. We compare active
learning methods by applying them to a continuous function and a discontinuous function. Training is more difficult for a
discontinuous function than a continuous function, and the volume of data for active learning is substantially less than for
passive learning.
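The MIQR criterion itself is concrete enough to sketch: compute the inter-quartile range of the ensemble's outputs for each candidate input and query the point where it is largest. The toy linear ‘ensemble’ and candidate pool below are assumptions for illustration.

```python
from statistics import quantiles

def miqr_select(candidates, ensemble):
    """Pick the input where the ensemble's predictions disagree most,
    measured by the inter-quartile range (robust against outlier members)."""
    def iqr(x):
        preds = [net(x) for net in ensemble]
        q1, _, q3 = quantiles(preds, n=4)
        return q3 - q1
    return max(candidates, key=iqr)

# A toy 'ensemble': models that agree near x = 0 and diverge for large x.
ensemble = [lambda x, k=k: k * x for k in (0.8, 0.9, 1.0, 1.1, 1.2)]
pool = [0.0, 0.5, 1.0, 2.0]
print(miqr_select(pool, ensemble))   # 2.0: the point of maximum disagreement
```

Using the inter-quartile range rather than, say, the variance is what makes the selection insensitive to a single wildly wrong ensemble member, the robustness property the abstract emphasises.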
14.
Stuart Jackson Nuala Brady Fred Cummins Kenneth Monaghan 《Artificial Intelligence Review》2006,26(1-2):141-154
Recent findings in neuroscience suggest an overlap between those brain regions involved in the control and execution of movement
and those activated during the perception of another's movement. This so-called ‘mirror neuron’ system is thought to underlie
our ability to automatically infer the goals and intentions of others by observing their actions. Kilner et al. (Curr Biol
13(6):522–525, 2003) provide evidence for a human ‘mirror neuron’ system by showing that the execution of simple arm movements
is affected by the simultaneous perception of another’s movement. Specifically, observation of ‘incongruent’ movements made
by another human, but not by a robotic arm, leads to greater variability in the movement trajectory than observation of movements
in the same direction. In this study we ask which aspects of the observed motion are crucial to this interference effect by
comparing the efficacy of real human movement to that of sparse ‘point-light displays’. Eight participants performed whole
arm movements in both horizontal and vertical directions while observing either the experimenter or a virtual ‘point-light’
figure making arm movements in the same or in a different direction. Our results, however, failed to show an effect of ‘congruency’
of the observed movement on movement variability, regardless of whether a human actor or point-light figure was observed.
The findings are discussed, and future directions for studies of perception-action coupling are considered.
15.
Ting Wang Jochem Vonk Benedikt Kratz Paul Grefen 《Distributed and Parallel Databases》2008,23(3):235-270
Transactions have been around since the Seventies to provide reliable information processing in automated information systems.
Originally developed for simple ‘debit-credit’ style database operations in centralized systems, they have moved into much
more complex application domains including aspects like distribution, process-orientation and loose coupling. The amount of
published research work on transactions is huge and a number of overview papers and books already exist. A concise historic
analysis providing an overview of the various phases of development of transaction models and mechanisms in the context of
growing complexity of application domains is still missing, however. To fill this gap, this paper presents a historic overview
of transaction models organized in several ‘transaction management eras’, thereby investigating numerous transaction models
ranging from the classical flat transactions, via advanced and workflow transactions to the Web Services and Grid transaction
models. The key concepts and techniques with respect to transaction management are investigated. Placing well-known research
efforts in historical perspective reveals specific trends and developments in the area of transaction management. As such,
this paper provides a comprehensive, structured overview of developments in the area.
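A minimal example of the classic flat ‘debit-credit’ transaction from which this history starts, using Python's built-in sqlite3 (the table, account names, and balances are invented): both updates commit together, or a rollback undoes them.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE accounts (name TEXT PRIMARY KEY, balance REAL)")
conn.executemany("INSERT INTO accounts VALUES (?, ?)",
                 [("alice", 100.0), ("bob", 50.0)])
conn.commit()

def transfer(conn, src, dst, amount):
    """Flat 'debit-credit' transaction: the debit and the credit commit
    together, or neither does (atomicity via rollback)."""
    try:
        conn.execute("UPDATE accounts SET balance = balance - ? WHERE name = ?",
                     (amount, src))
        bal, = conn.execute("SELECT balance FROM accounts WHERE name = ?",
                            (src,)).fetchone()
        if bal < 0:
            raise ValueError("insufficient funds")
        conn.execute("UPDATE accounts SET balance = balance + ? WHERE name = ?",
                     (amount, dst))
        conn.commit()
    except Exception:
        conn.rollback()
        raise

transfer(conn, "alice", "bob", 30.0)
print(conn.execute("SELECT balance FROM accounts ORDER BY name").fetchall())
# [('alice', 70.0), ('bob', 80.0)]

try:
    transfer(conn, "alice", "bob", 500.0)    # would overdraw: rolled back
except ValueError:
    pass
print(conn.execute("SELECT balance FROM accounts ORDER BY name").fetchall())
# unchanged: [('alice', 70.0), ('bob', 80.0)]
```

Everything the survey traces, from nested and workflow transactions to Web Services and Grid models, generalises this atomic commit-or-rollback guarantee to distributed, loosely coupled settings.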
16.
David Martin Jacki O’neill Dave Randall Mark Rouncefield 《Computer Supported Cooperative Work (CSCW)》2007,16(3):231-264
As a comparatively novel but increasingly pervasive organizational arrangement, call centres have been a focus for much recent
research. This paper identifies lessons for organizational and technological design through an examination of call centres
and ‘classification work’ – explicating what Star [1992, Systems/Practice vol. 5, pp. 395–410] terms the ‘open black box’. Classification is a central means by which organizations standardize procedure,
assess productivity, develop services and re-organize their business. Nevertheless, as Bowker and Star [1999, Sorting Things Out: Classification and Its Consequences. Cambridge MA: MIT Press] have pointed out, we know relatively little about the work that goes into making classification
schema what they are. We will suggest that a focus on classification ‘work’ in this context is a useful exemplar of the need
for some kind of ‘meta-analysis’ in ethnographic work also. If standardization is a major ambition for organizations under
late capitalism, then comparison might be seen as a related but as-yet unrealized one for ethnographers. In this paper, we
attempt an initial cut at a comparative approach, focusing on classification because it seemed to be the primary issue that
emerged when we compared studies. Moreover, if technology is the principal means through which procedure and practice are implemented
and if, as we believe, classifications are becoming ever more explicitly embedded within it (for instance with the development
of so-called ‘semantic web’ and associated approaches to ontology-based design), then there is clearly a case for identifying
some themes which might underpin classification work in a given domain.
17.
Christian Greiffenhagen 《Computer Supported Cooperative Work (CSCW)》2008,17(1):35-62
This paper discusses how a new technology (designed to help pupils with learning about Shakespeare’s Macbeth) is introduced and integrated into existing classroom practices. It reports on the ways through which teachers and pupils
figure out how to use the software as part of their classroom work. Since teaching and learning in classrooms are achieved
in and through educational tasks (what teachers instruct pupils to do), the analysis explicates some notable features of a particular task (storyboarding one
scene from the play). It is shown that both ‘setting the task’ and ‘following the task’ have to be locally and practically
accomplished and that tasks can operate as a sense-making device for pupils’ activities. Furthermore, what the task ‘is’,
is not entirely established through the teacher’s initial formulation, but progressively clarified through pupils’ subsequent
work, and in turn ratified by the teacher.
18.
Shlomo Djerassi 《Multibody System Dynamics》2012,27(2):173-195
This paper deals with one-point collision with friction in three-dimensional, simple non-holonomic multibody systems. With
Keller’s idea regarding the normal impulse as an independent variable during collision, and with Coulomb’s friction law, the
system equations of motion reduce to five, coupled, nonlinear, first order differential equations. These equations have a
singular point if sticking is reached, and their solution is ‘navigated’ through this singularity in a way leading to either
sticking or sliding renewal in a uniquely defined direction. Here, two solutions are presented in connection with Newton’s,
Poisson’s and Stronge’s classical collision hypotheses. One is based on numerical integration of the five equations. The other,
significantly faster, replaces the integration by a recursive summation. In connection with a two-sled collision problem,
close agreement between the two solutions is obtained with a few summation steps.
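A heavily simplified planar, single-particle sketch of Keller's idea (the paper treats three-dimensional, simple non-holonomic multibody systems): the normal impulse p is the integration variable, Coulomb friction erodes the tangential velocity until sticking, and Poisson's hypothesis terminates restitution. All numbers are illustrative.

```python
import math

def collide(m, vn0, vt0, mu, e, dp=1e-4):
    """March the collision forward in the normal impulse p (Keller's idea).
    While sliding, Coulomb friction removes tangential velocity at rate
    mu/m per unit normal impulse; once vt reaches 0 it sticks there.
    Compression ends at vn = 0; Poisson's hypothesis then extends the
    total impulse to (1 + e) times the compression impulse."""
    vn, vt, p = vn0, vt0, 0.0
    def advance():
        nonlocal vn, vt, p
        vn += dp / m
        if vt != 0.0:
            step = min(mu * dp / m, abs(vt))  # cannot overshoot sticking
            vt -= math.copysign(step, vt)
        p += dp
    while vn < 0.0:                           # compression phase
        advance()
    p_c = p
    while p < (1.0 + e) * p_c:                # restitution phase
        advance()
    return vn, vt

vn, vt = collide(m=1.0, vn0=-1.0, vt0=0.8, mu=0.3, e=0.8)
print(round(vn, 2))   # ~0.8: rebound set by the restitution coefficient
print(round(vt, 2))   # ~0.26: tangential velocity eaten by friction
```

The `min(..., abs(vt))` clamp is a crude stand-in for the singularity handling the paper describes: it forces the solution onto the sticking branch instead of letting friction reverse the slip direction within one step.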
19.
Alastair Butler 《Journal of Logic, Language and Information》2007,16(3):241-264
This paper develops a semantics with control over scope relations using Vermeulen’s stack valued assignments as information
states. This makes available a limited form of scope reuse and name switching. The goal is to have a general system that fixes
available scoping effects to those that are characteristic of natural language. The resulting system is called Scope Control
Theory, since it provides a theory about what scope has to be like in natural language. The theory is shown to replicate a
wide range of grammatical dependencies, including options for, and constraints on, ‘donkey’, ‘binding’, ‘movement’, ‘Control’
and ‘scope marking’ dependencies.
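Vermeulen's stack-valued assignments can be sketched directly: each variable denotes a stack of values, (re)quantification pushes, and leaving a scope pops. The class and the example values below are illustrative assumptions, not the paper's formal system.

```python
class StackAssignment:
    """Vermeulen-style assignment: each variable denotes a stack of
    values; (re)quantifying x pushes, leaving x's scope pops, and a
    free occurrence of x refers to the top of the stack."""
    def __init__(self):
        self.stacks = {}

    def push(self, var, value):      # entering a quantifier's scope
        self.stacks.setdefault(var, []).append(value)

    def pop(self, var):              # leaving that scope
        self.stacks[var].pop()

    def value(self, var):            # evaluating a free occurrence of var
        return self.stacks[var][-1]

g = StackAssignment()
g.push("x", "donkey")    # a first binder for x ...
g.push("x", "farmer")    # ... then x is re-quantified: the old value is shadowed
print(g.value("x"))      # farmer: the innermost binder wins ...
g.pop("x")
print(g.value("x"))      # donkey: ... but the outer binding survives the pop
```

This push/pop discipline is what gives the limited scope reuse and name switching the abstract mentions: re-binding a variable does not destroy the outer binding, it only shadows it for the duration of the inner scope.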
20.
Senaka Fernando Jyoti Choudrie Mark Lycett Sergio de Cesare 《Information Systems Frontiers》2012,14(2):279-299
The UK National Health Service (NHS) is embarking on its largest investment programme in Information Technology (IT) to date. The
National Programme for IT (NPfIT) in the NHS is the biggest civil IT project in the world and seeks to revolutionise the way
care is delivered, drive up quality and make more effective use of resources of the NHS. Despite these high expectations,
the NHS has historically experienced some high profile IT failures and the sponsors of the programme admitted that there remain
a number of critical barriers to the implementation of the programme. Clinicians’ reluctance to accept new IT systems at a
local level is seen to be a major factor in this respect. Focusing on such barriers, this paper reports research that explored
and explained why such reluctance occurs in the NHS. The main contribution of this research derives from the distinctive approach
based on Kelly’s Personal Construct Theory (PCT) to understand the ‘reluctance’. The argument presented in the paper indicates
that such reluctance should be viewed not as deliberate resistance by clinicians, but as their inability to change
their established group personal constructs related to ISDD activities. Therefore, this paper argues that the means of
reducing the ‘reluctance’ should be creative rather than corrective or normative. The research took place in an NHS Trust
and the paper pays considerable attention to technological, behavioural and clinical perspectives that emerged from the study.
The research was conducted as a case study in an NHS trust, and data were collected from two local NHS IT projects. The main research
participants in this study were: (a) IT professionals, including IT project managers and senior IT managers; and (b) senior
clinicians.