
About

The annual meeting of the Cognitive Science Society is aimed at basic and applied cognitive science research. The conference hosts the latest theories and data from the world's best cognitive science researchers. Each year, in addition to submitted papers, researchers are invited to highlight some aspect of cognitive science.

Invited Research Presentations

Conceptual Change and Other Varieties of Cognitive Development: Some Distinctions in the Emergence of Biological Thought

Conceptual change is generally proposed to occur in one of three ways: (1) constant evolution of new conceptual structures out of older ones, such that all traces of the original may eventually disappear; (2) creation of new concepts out of old ones wherein the old remain intact; and (3) emergence of new concepts from preconceptual states via general learning procedures. The actual incidence of these kinds of change, however, may be overestimated at the expense of two other patterns of cognitive development. One type involves changing access to already present explanatory systems, often through a reframing of what properties and relations are considered most relevant. Whether to call this process conceptual change is controversial, as the basic systems of explanation may be constant throughout. The other type involves a range of mechanisms that are not conceptual change in any normal sense of the word, but rather increasing accretion and/or differentiation of knowledge within a highly stable and regular conceptual structure. Not surprisingly, details of all these views depend greatly on models of what concepts actually are, and a particular view of concepts, with its implications, is discussed. These issues are explored with examples from the realm of biological thought.

Instructional Explanations in History and Mathematics

This paper extends a dialogue on the nature of explanations and instructional discourse to include instructional explanations. The paper examines instructional explanations at three levels: (a) the distinctions between specific types of explanations (common, disciplinary, self, and instructional) with respect to specific features (problem type, initiation, evidence, form, and audience); (b) the occasions within history (events, structures, themes, and metasystems) and mathematics (operations, entities, principles, and metasystems) that prompt explanations; and (c) critical goal states present in successful explanation (representations known, verbal discourse complete, nature of problem understood, principles accessed). Using these three levels, three examples of shared instructional explanations are explored, two in history and one in mathematics.

Creative Conceptual Change

Creative conceptual change involves (a) the construction of new concepts and of coherent belief systems, or theories, relating these concepts, and (b) the modification and extrapolation of existing concepts and theories in novel situations. I discuss these and other types of conceptual change, and present computational models of constructive and extrapolative processes in creative conceptual change. The models have been implemented as computer programs in two very different task domains, autonomous robotic navigation and fictional story understanding.

Integrating Cognitive and Conversational Accounts of Conceptual Change in Qualitative Physics Learning

Empirical data on students' science learning has demonstrated that learning science is a very complicated and fine-grained process. Simple replacement models (destruct the misconception, and instruct the target concept) have failed to cope with the observation that both states of knowledge are not unitary, monolithic, tightly coupled systems. At the same time, expert-novice research has produced a long list of specific areas in which students and scientists are said to fundamentally differ, spanning all the way from perception to metacognition. The deep irony is that Cognitive Science research, which should make instruction easier, has in fact expanded the "great divide" by locating more and more ways in which students and scientists differ. The time has come to articulate the commonalities among students and scientists that enable conceptual change to occur. Students and scientists have commonalities both in cognition and conversation. Research in qualitative physics and epistemology is providing an account of physics learning in terms of re-using cognitive structures available to both students and scientists (i.e., p-prims and qualitative cases). Social studies of science show that turn-taking can allow negotiation of knowledge, both in everyday conversation and in the laboratory. This paper discusses research demonstrating the deep compatibility of cognitive and conversational accounts, and their potential symbiosis as an account of conceptual change in students' physics learning. In particular, I present data from students' use of a computer simulation, "The Envisioning Machine," which shows that students' conversational and cognitive processes can operate over the same basic data (qualitative physics knowledge), thereby allowing students to achieve conceptual change by simultaneously exploiting cognitive and social constraints.

Minimal Generative Explanations: A Middle Ground between Neurons and Triggers

This paper describes a class of procedures for discovering linguistic structure, along with some specific procedures and measures of their effectiveness. This approach is well-suited to problems like learning the forms of words from connected speech, learning word formation rules, and learning phonotactic constraints and phonological rules. These procedures acquire a symbolic representation, such as a list of word forms, a list of morphemes, or a set of context-sensitive rules, each of which serves as the language-particular component of a generative grammar. Each procedure considers only a clearly defined set of possible generative grammars. This hypothesis space can be thought of as the procedure's "universal grammar". Procedures are evaluated for effectiveness by computer simulation on input consisting of naturally occurring language. Thus, they must be robust. That is, small changes to the input must lead to little or no change in the conclusions. This research program resembles the connectionist program in its focus on phenomena like word segmentation, morphology, and phonology, its emphasis on robustness, and its reliance on computer simulation. However, it is closer to parameter setting and learnability theory in its focus on learning generative grammars selected from a clearly defined hypothesis space, or "universal grammar". Further, to the extent that connectionism is about neural implementations while parameter setting and learnability theory are about universal grammars, the study of effective procedures for language acquisition stands at an intermediate level of abstraction.
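
As a concrete illustration of the kind of procedure this program studies, the sketch below segments an unsegmented symbol stream by placing word boundaries where the transitional probability between adjacent symbols is low. It is a minimal, hypothetical example rather than the author's procedure: the toy corpus, the bigram statistic, and the threshold are all assumptions made for illustration.

```python
# Minimal sketch (not the paper's procedure): segment an unsegmented character
# stream by hypothesizing word boundaries wherever the transitional probability
# between adjacent symbols is low. Corpus and threshold are made up.
from collections import Counter

corpus = "thedogsawthecatthecatsawthedog"  # hypothetical unsegmented input

# Count how often each symbol is followed by each other symbol.
bigrams = Counter(zip(corpus, corpus[1:]))
unigrams = Counter(corpus[:-1])

def transitional_probability(a, b):
    """Estimate P(b | a) from the corpus."""
    return bigrams[(a, b)] / unigrams[a]

THRESHOLD = 0.5  # assumed cut-off; a real procedure would tune or learn this

words, current = [], corpus[0]
for a, b in zip(corpus, corpus[1:]):
    if transitional_probability(a, b) < THRESHOLD:
        words.append(current)   # low predictability -> hypothesize a boundary
        current = ""
    current += b
words.append(current)

print(words)  # candidate word forms discovered from the connected stream
```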

Grammatical Complexity and the Time Course of Syntactic Acquisition

What is it about a child's linguistic competence that changes during syntactic development? Within the principles and parameters framework of Chomsky (1981), a child's grammar differs from an adult's in having different settings for certain parameters. The process of acquisition, under this view, consists of the sequence of parameter vectors which the child entertains as hypothesis grammars. If at some point in acquisition a parameter which is relevant for a particular construction is incorrectly set, the child will be unable to perform an adult-like analysis. While this view provides an answer to the "logical problem of language acquisition," it fails to explain why certain developmental stages exist. Beyond stipulated orderings of parameter settings, there is little that can be said in this framework to truly explain the time course of acquisition. In this paper, I argue that the stages of syntactic acquisition can be understood as deriving from an increase in the child's ability to handle grammatical complexity. I consider a number of well-attested acquisitional difficulties in a range of seemingly disparate aspects of syntax: relative clauses, control, and verbal morphology. Using the formal system of Tree Adjoining Grammar (TAG), I show how the single hypothesis that children lack the ability to perform the TAG operation of adjoining relates these difficulties in a novel way, and provides us with a new type of explanation for the time course of syntactic development in terms of the complexity of formal grammatical devices.

Discovering Sound Patterns in the Native Language

Infants make their first contacts with their native language through its sound patterns. Research over the past twenty years has demonstrated that infants are well-equipped to perceive subtle distinctions in speech sounds and to cope with the variability that is present in the speech signal. At the same time, it is clear that in order to progress in acquiring a language, infants need to learn about the particular characteristics of the sounds and combinations of sounds that are used in their native language. Recent findings suggest that the time between 6 and 9 months of age may be a particularly fertile one for learning about the sound patterns of one's native language. There are indications that infants are developing sensitivity to distributional properties of sounds in the input at this time. The implications that these findings have for our understanding of processes underlying language acquisition will be considered.

The (il)Logical Problem of Language Acquisition

The fact that children often appear to learn little from corrective feedback has led theorists to construct the "logical problem of language acquisition" (LPLA). The idea is that, without further formal constraints, language can be formally proven to be unlearnable. This paper argues that the LPLA is based on a restricted view of the nature of language, the nature of the learner, and the nature of the learning environment. When we examine the full set of forces channeling language learning, including competition, recasts, conservatism, and indirect negative evidence, we see that language is in fact highly learnable and the LPLA is not well motivated. In its place, we hope that language theory can focus on the analysis of overgeneralizations as ways of diagnosing the shape of underlying analogic pressures.

Discovering Structure-Function Relationships in a Competitive Modular Connectionist Architecture

The architectural properties of a neural network, such as its size, shape, and connectivity pattern, determine the network's functional properties. From an engineering viewpoint, discovering a suitable architecture for a given task is often the only way of achieving good performance on the task. From a cognitive science viewpoint, the study of structure-function relationships helps in understanding the "modular" aspects of the mind/brain. We present a multi-network, or modular, connectionist architecture in which tasks are not pre-assigned to networks. Instead, networks compete for the "right" to learn different tasks. Results suggest that the structure of each network biases the competition such that networks tend to learn tasks for which their structure is well-suited. Experimental findings suggest that the neural subsystem that encodes categorical spatial relations (e.g., on/off, above/below) is distinct from the one that encodes metric spatial relations (e.g., object A is 3.5 inches away from object B). Similarly, distinct subsystems seem to be responsible for recognizing a visual stimulus as a member of a category (e.g., dog) and for recognizing a stimulus as a specific exemplar (e.g., Fido). Furthermore, categorical spatial relations and category representations of shape are encoded more effectively in the left hemisphere, whereas coordinate spatial relations and exemplar representations of shape are encoded more effectively in the right cerebral hemisphere. We have used computer simulations of artificial neural network models to show that differences in receptive field sizes can promote such organization. When visual input was filtered through relatively small nonoverlapping receptive fields, networks learned to categorize shapes relatively quickly; in contrast, when input was filtered through relatively large overlapping receptive fields, networks learned to encode specific shape exemplars or metric spatial relations relatively quickly. In addition, using the modular architecture described above, networks with small nonoverlapping receptive fields tended to win the competition for categorical tasks, whereas networks with large overlapping receptive fields tended to win the competition for exemplar/metric tasks.
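
The competition among networks described above resembles a winner-take-all mixture of experts. The sketch below illustrates that core idea under simplifying assumptions (linear expert networks, a made-up two-task data stream, an LMS update); it is not the authors' simulation, only a minimal illustration of how letting the currently best network learn each example drives specialization.

```python
# Minimal sketch of the "competing experts" idea: several networks see every
# training example, but only the network whose prediction is currently best
# gets to update its weights, so each network tends to specialize on the tasks
# its structure handles well. Sizes, data, and learning rule are assumptions.
import numpy as np

rng = np.random.default_rng(0)

N_IN, N_EXPERTS, LR = 4, 2, 0.1
experts = [rng.normal(scale=0.1, size=N_IN) for _ in range(N_EXPERTS)]

def make_example():
    x = rng.normal(size=N_IN)
    task = rng.integers(2)            # two different input-output mappings
    y = x[:2].sum() if task == 0 else x[2:].sum()
    return x, y

for _ in range(5000):
    x, y = make_example()
    preds = [w @ x for w in experts]
    errors = [(y - p) ** 2 for p in preds]
    winner = int(np.argmin(errors))   # competition: best expert claims the example
    # Only the winner learns (simple LMS update), sharpening its specialization.
    experts[winner] += LR * (y - preds[winner]) * x

print(np.round(experts[0], 2), np.round(experts[1], 2))
```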

Computational Principles in Visual Comprehension: Simulating Neuropsychological Deficits by Lesioning Attractor Networks

A central challenge in cognitive neuroscience is to explain how disorders of brain function give rise to disorders of cognition. In this regard, connectionist modeling provides a useful computational formalism for relating cognitive processes to their underlying neurological implementation. In the domain of visual comprehension of words and objects, I will show how two peculiar patterns of impairment observed after brain damage, deep dyslexia and optic aphasia, also arise in simulations embodying a set of general computational principles: (1) visual and semantic information is represented as distributed patterns of activity over separate groups of units such that the patterns exhibit the appropriate similarities within and between these domains; (2) the knowledge of the relationships between representations is encoded as weights on connections between units; and (3) the mapping between representations is accomplished by interactivity among units, forming "attractors" for familiar patterns of activity. Further assumptions are that short-term correlational information is useful in object recognition but not in word recognition, and that there is less structure in the mapping from visual to semantic representations for words than for objects. In a simulation of word reading, damage leads to the peculiar interactions of visual and semantic similarities in errors found in deep dyslexia. In a simulation of object naming, very few purely visual errors occur after damage, but now semantic similarity interacts with perseverative effects from previous trials, as in optic aphasia. The replication of complex empirical phenomena concerning impaired visual comprehension of both words and objects provides evidence that the general principles underlying the simulations also apply to the semantic processing of visual information and its breakdown following brain damage in humans.
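
A minimal, hypothetical sketch of the lesioning methodology is given below: a Hopfield-style network stores a few patterns as attractors, a random fraction of its connections is removed to simulate damage, and retrieval from a degraded cue is compared before and after. The network type, sizes, and lesion procedure are assumptions for illustration and are much simpler than the simulations described in the abstract.

```python
# Toy illustration of "lesioning" an attractor network: store patterns in a
# Hopfield-style network, remove a random fraction of the connections, and see
# whether a degraded cue still settles into the right attractor.
import numpy as np

rng = np.random.default_rng(1)
N, N_PATTERNS = 64, 3

patterns = rng.choice([-1, 1], size=(N_PATTERNS, N))

# Hebbian storage: each stored pattern becomes an attractor of the dynamics.
W = sum(np.outer(p, p) for p in patterns) / N
np.fill_diagonal(W, 0)

def settle(state, steps=20):
    for _ in range(steps):
        state = np.sign(W @ state)
        state[state == 0] = 1
    return state

def lesion(weights, proportion):
    """Return a copy of the weights with a random fraction of connections removed."""
    mask = rng.random(weights.shape) > proportion
    return weights * mask

cue = patterns[0].copy()
cue[:10] *= -1                       # a noisy / partial input

print("intact overlap:", patterns[0] @ settle(cue) / N)
W = lesion(W, proportion=0.5)        # simulate brain damage
print("lesioned overlap:", patterns[0] @ settle(cue) / N)
```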

Neuropsychological Implications for Attention in Perception and Action

Organized behaviour requires the coordination of perception and action, both of which rely on information that is spatially coded. The question is whether perception and action draw upon a common set of spatial representations or whether they rely on separate representations which must be linked to achieve integrated behaviour. These two alternative views are difficult to distinguish in normal behaviour, but neuropsychological evidence obtained from patients with spatial impairments may prove useful in addressing this issue. Patients with unilateral neglect, a deficit in visuospatial attention following right hemisphere damage, fail to report information appearing on their contralesional left-hand side. Many of these same patients are also impaired at directing actions to their contralesional left-hand side. Experiments designed to examine the relationship between perceptual neglect and action (motor) neglect reveal a close correspondence between these deficits in at least some patients. These findings suggest a tight coupling of perception and action, indicating the use of a common spatial map.

Representation and Learning in Situated Agents

For my purposes, a situated agent is a system that has an ongoing interaction with a dynamic environment. It could be a mobile robot, a factory controller, or a software-based meeting scheduler. Traditional models of program specification and correctness are not directly suited for use in situated agents. What is important about such agents is their situatedness: how they are connected to and affected by their environment. Situated automata theory, developed by Stanley Rosenschein and myself, provides a formal method for characterizing the interactions between an agent and its situating environment. The designer of an agent can provide declarative, symbolic specifications of the agent's knowledge and behavior, but these specifications can be compiled into a compact, efficient computation to be performed by the agent. In addition, situated automata theory allows the analysis of different choices of representation of the internal state of an agent. This analysis provides a technical basis for arguing that, in many cases, traditional "symbolic" representations are inefficient and difficult to maintain correctly. It also points out cases in which symbolic representations are to be preferred.

Behavior-Based Artificial Intelligence

This paper attempts to define Behavior-Based Artificial Intelligence (AI) as a new approach to the study of intelligence. It distinguishes this approach from the traditional Knowledge-Based approach in terms of the questions studied, the solutions adopted, and the criteria used for success. It does not limit Behavior-Based AI to the study of robots, but rather presents it as a general approach for building autonomous systems that have to deal with multiple, changing goals in a dynamic, unpredictable environment.

Situated Decision-Making and Recognition-Based Learning: Applying Symbolic Theories to Interactive Tasks

This paper describes two research projects that study typical Situated Action tasks using traditional cognitive science methodologies. The two tasks are decision making in a complex production environment and interaction with an Automated Teller Machine (ATM). Both tasks require that the decision maker and the user search for knowledge in the environment in order to execute their tasks. The goal of these projects is to investigate the interaction between internal knowledge and dependence on external cues in these kinds of tasks. We have used the classical expert-novice paradigm to study information search in the decision making task, and cognitive modeling to predict the behavior of ATM users. The results of the first project strongly indicate that decision makers are forced to rely on environmental cues (knowledge in the environment) to make decisions, independently of their level of expertise. We also found that performance and information search are radically different between experts and novices. Our explanation is that prior experience in dynamic decision tasks improves performance by changing information search behavior instead of inducing superior decision heuristics. In the second study we describe a computer model, based on the Soar cognitive architecture, that learns part of the task of using an ATM. The task is performed using only the external cues available from the interface itself, and the knowledge assumed of typical human users (e.g., how to read, how to push buttons). These projects suggest that tasks studied by Situated Action research pose interesting challenges for traditional symbolic theories. Extending symbolic theories to such tasks is an important step toward bridging these theoretical frameworks.

Symposia

Symposium In Memory of Allen Newell

This is a symposium organized in memory of Allen Newell. Allen's central focus throughout his long and productive research career was the nature of the mind. The approach he pioneered in this study was the development of Unified Theories of Cognition. A unified theory of cognition (UTC) is a single set of interacting mechanisms that jointly support the full breadth and richness of human cognition. Though no theories have yet come close to full breadth or richness, significant progress is being made along a number of fronts. Continuing to strive to reduce the remaining difference is one of the most exciting and crucial challenges facing cognitive science today. During the last decade of Allen's career, his efforts focused on the development of a particular candidate unified theory, Soar. At the Twelfth Cognitive Science Conference in 1990, we presented, in symposium form, an update on the status of Soar as a UTC. We could think of no more appropriate way to remember Allen, and his lifelong commitment to his science, than to use the present opportunity to provide a second update on this topic. The presentations here were selected to illustrate some of the breadth and depth of the development of Soar as a UTC. The first presentation simply provides a general overview of Soar and its use as a UTC [Rosenbloom]. The subsequent presentations focus on three particular research efforts: visual attention [Wiesmeyer], sentence comprehension [Lewis], and learning from instruction [Huffman]. These efforts share a significant recent trend in the development of Soar by focusing on its interaction with the external environment. Among the three, they span behavior in a range of time scales for human cognition (from milliseconds [Wiesmeyer], to seconds [Lewis], to minutes [Huffman]), and more general qualitative properties of human behavior.

Overview of Soar as a Unified Theory of Cognition: Spring 1993

This article provides a very brief overview of the current status, as of Spring 1993, of Soar as a unified theory of cognition. Moreover, it serves to set the stage for the detailed discussions of individual Soar systems in the three papers that follow. We begin by summarizing the structure of Soar as a cognitive system, and then outline its status as a unified theory of cognition.

NOVA, Covert Attention Explored Through Unified Theories of Cognition

Covert visual attention is a subtle part of human vision that has been widely researched in the psychology community. Most often visual attention is thought to involve movements of the eyes or head; however, covert visual attention does not involve overt movements of any sort. It has often been described in a homuncular sense as the "mind's eye." This paper introduces both a new model of covert visual attention and a new approach in which to investigate attention. The approach is based on five assertions: (1) Development of models of attentional processes should occur in the context of a fixed, explicit model of nonattentional processes. (2) Evaluation of attentional models should occur in the context of complete tasks. (3) Judgment of the quality of an attentional model should be with respect to its ability to cover many tasks while maintaining constant parameters. (4) Computer implementation and simulation of an attentional model and the tasks it claims to cover should be used for demonstrating its sufficiency. (5) A process model (a model that seeks to correspond at some level of analysis to actual mechanisms of behavior) should be able to account for both the timing and the functions of behavior. NOVA (Not Overt Visual Attention), the first operator-based model of covert visual attention, is based on the Model Human Processor [Card, Moran, and Newell, 1983], a model of nonattentional processes that has been applied successfully in Human-Computer Interaction (HCI) research. In this paper we review the results of using NOVA to model seven qualitatively different immediate-response tasks from the psychological literature. As a test of the sufficiency of NOVA, we implemented NOVA and each of the task models in the Soar cognitive architecture, a computer model of human behavior that has been proposed as the basis of Newell's "Unified Theories of Cognition" (UTC) [Newell, 1990]. NOVA is both a new theory of attention and a framework in which existing theories of attention have been unified.

An Architecturally-based Theory of Human Sentence Comprehension

Real-time language comprehension is an important area of focus for a candidate unified theory of cognition. In his 1987 William James lectures, Allen Newell sketched the beginnings of a comprehension theory embedded in the Soar architecture. This theory, NL-Soar, has developed over the past few years into a detailed computational model that provides an account of a range of sentence-level phenomena: immediacy of interpretation, garden path effects, unproblematic ambiguities, parsing breakdown on difficult embeddings, acceptable embedding structures, and both modular and interactive ambiguity resolution effects. The theory goes beyond explaining just a few examples; it addresses over 80 different kinds of constructions. Soar is not merely an implementation language for the model, but plays a central theoretical role. The predictive power of NL-Soar derives largely from architectural mechanisms and principles that shape the comprehension capability so that it meets the real-time constraint.

Learning from Instruction: A Knowledge-level Capability within a Unified Theory of Cognition

How does working within a unified theory of cognition - an architecture - provide useful constraint when modeling large-timescale tasks, where performance is primarily determined by knowledge rather than the architecture's basic mechanisms? We present a methodology for extracting the constraint that comes from the architecture, by deriving a set of architectural entailments which argue for certain model properties over others. The methodology allows us to factor the effect that various architectural properties have on a model. We demonstrate the methodology with a case study: a model of learning procedures from natural language instructions, Instructo-Soar, within the Soar architecture.

Symposium: Tutorial Discourse

The striking effectiveness of one-on-one tutorial instruction by human tutors has sparked great interest in efforts to emulate that effectiveness with artificially intelligent computerized instructional systems. Despite some success in that endeavor, present intelligent tutoring systems circumvent, evade, and finesse the problem of natural language interaction in various ways because the demands of tutorial interaction are really beyond the state of the art in computerized natural language. This symposium presents research relevant to overcoming that limitation. Human tutorial interaction is being studied from the perspectives of linguists (Fox), psychologists (Graesser), and computational linguists (Moore, Evens) who aim to emulate it in artificial systems. Among the issues that arise in these studies are the size or scope of the discourse organization imparted by the tutor, the balance between the tutor's agenda and immediate responsiveness to the student, the extent to which tutors revise their plans dynamically, the nature and breadth of knowledge required to support these interactions, the relationship between tutorial interaction and normal conversational patterns, and the nature of repair and correction processes, including the use of positive, neutral and negative feedback. The ease or feasibility of emulating these features of human tutorial discourse certainly varies, but it is also true that the introduction of a computer as a conversational participant is a significant change: what is the perceived social status or role of a computer? Similarly, it is possible that ideal computerized tutorial discourse might differ from what is observed among humans. The diverse research perspectives required to address these issues typify the interdisciplinary character of cognitive science.

Correction in Tutoring

The goal of the current paper is to describe the results of an empirical study of tutoring dialogue, with special attention to the issue of correction in tutoring. In particular, this paper presents findings which strongly suggest that the everyday preference for self-correction (Pomerantz, 1975) is maintained even in a heavily knowledge-asymmetric situation like tutoring. For further details of this study, readers should consult Fox (1993).

Dialogue Patterns and Feedback Mechanisms during Naturalistic Tutoring

Although it is well documented that one-to-one tutoring is more effective than alternative training methods, there have been few attempts to examine the process of naturalistic tutoring. This project explored dialogue patterns in 44 tutoring sessions in which graduate students tutored undergraduate students on troublesome topics in research methods. We analyzed pedagogical strategies, feedback mechanisms, question asking, question answering, and pragmatic assumptions during the tutoring process.

What Makes Human Explanations Effective?

If computer-based instructional systems are to reap the benefits of natural language interaction, they must be endowed with the properties that make human natural language interaction so effective. To identify these properties, we replaced the natural language component of an existing Intelligent Tutoring System (ITS) with a human tutor, and gathered protocols of students interacting with the human tutor. We then compared the human tutor's responses to those that would have been produced by the ITS. In this paper, I describe two critical features that distinguish human tutorial explanations from those of their computational counterparts.

Synthesizing Tutorial Dialogues

This paper discusses problems of synthesizing tutorial discourse in an intelligent tutoring system, Circsim-Tutor, designed to help first-year medical students solve problems in cardiovascular physiology involving the negative feedback system that controls blood pressure. In order to find out how human tutors handle discourse problems, we have captured both face-to-face and keyboard-to-keyboard tutoring sessions in which two of the authors (JAM and AAR) tutor their own students. This paper focuses on the ways in which tutors tell students that they have made an error. We describe a classification scheme for negative acknowledgments and examine the frequency with which different types of acknowledgments occur in face-to-face and keyboard-to-keyboard sessions. Our tutors seem to make more explicit negative acknowledgments than do the tutors studied by Fox, but their acknowledgments often lead into hints that help the student continue forward in the problem-solving process. We have collected initial data about the ways in which our tutors combine hints and negative acknowledgments.

Symposium: Grounding, Situatedness, and Meaning

This symposium is concerned with the notions of grounding and situatedness and their relevance to cognitive theories in general, and to theories of meaning in particular. Grounding is to be understood as the linking of system-internal objects (such as symbols or concepts) with the external objects they are about, through the system's sensorimotor interaction with them. Situatedness is to be understood as the immediate coupling of the system's actions with its environment. The discussion will be organized around the following questions: 1. How much does the concept of grounding contribute to theories of meaning? 2. Is grounding important for understanding natural cognitive systems and/or for designing artificial ones? 3. Is it possible to accommodate different notions of grounding (e.g., Harnad's and Brooks's senses) in a unified framework? 4. What is needed to extend current models of grounding so that they go beyond simple word semantics? 5. Should grounding be inextricably tied to situatedness? Is the notion of representation incompatible with the sort of dynamical system that seems to be called for in a situated model of cognition? The participants cover a wide range of views on these issues. Touretzky and Christiansen & Chater challenge the usefulness of the symbol grounding idea. Touretzky argues that the grounding of symbols in perception has little to say about what should be of most interest to cognitive scientists: how conceptual structures are constructed out of symbols. Christiansen and Chater consider the contributions of the notion of symbol grounding and of philosophical theories of meaning to each other. Their position is that the former has more to learn from the latter than vice versa. The other speakers come out on the side of one or another sort of grounding approach but disagree on whether symbols as such are necessary and on the promise of connectionist approaches to grounding. Harnad argues that symbol grounding is indeed a problem and, among several candidate approaches, he advocates a hybrid analog/connectionist/symbolic one because connectionism alone does not seem to be able to do the job. Brooks proposes an engineering approach to grounding: build a system based on some notion of physical grounding and see how much further it can be taken than existing systems based on the Physical Symbol System hypothesis. Lakoff argues that recent work on the grounding of spatial predicates by Terry Regier provides the foundations for grounded concepts without symbols. Dorffner and Prem make the case that a radical form of connectionism can address both grounding and situatedness and that grounded symbols, while not required for intelligent behavior, can enhance the performance of autonomous agents. Gasser argues that alongside the problem of grounding atomic symbols, there is the problem of grounding the structure of concepts, which cannot be handled by a symbol system alone.

Connectionism, Symbol Grounding, and Autonomous Agents

In this position paper we would like to lay out our view on the importance of grounding and situatedness for cognitive science. Furthermore, we would like to suggest that both aspects become relevant almost automatically if one consistently pursues the original ideas from connectionism. Finally, we discuss the relevance of grounding for theories of meaning and the possible contribution of symbol grounding for autonomous agents.

The Structure Grounding Problem

Work on grounding has made a start towards an understanding of where simple perceptual categories come from. But human concepts are made up of more than the simple categories of these models; concepts have internal structure. Within the visual/spatial domain, it is necessary to go beyond an account of how "square" and "above" are grounded to an account of how "here is a square above a circle which is to the left of a triangle" is grounded. Conceptual/linguistic structure is not just arbitrary patterning which falls out once the object and relation categories have been identified. Rather, it reflects fundamental aspects of the perception of objects and relations. Thus there is a need to ground the structure as well as the categories which make up concepts.

Symbol Grounding - the Emperor's New Theory of Meaning?

What is the relationship between cognitive theories of symbol grounding and philosophical theories of meaning? In this paper we argue that, although often considered to be fundamentally distinct, the two are actually very similar. Both set out to explain how non-referring atomic tokens or states of a system can acquire status as semantic primitives within that system. In view of this close relationship, we consider what attempts to solve these problems can gain from each other. We argue that, at least presently, work on symbol grounding is not likely to have an impact on philosophical theories of meaning. On the other hand, we suggest that the symbol grounding theorists have a lot to learn from their philosophical counterparts. In particular, the former must address the problems that have been identified in attempting to formulate philosophical theories of reference.

The Hearts of Symbols: Why Symbol Grounding is Irrelevant

Upon closer examination, and depending on who you read, "symbol grounding" turns out to be either the induction of trivial sensory predicates or the relabeling of a large portion of intelligent behavior as "transduction." Neither activity shows much promise for advancing our understanding of intelligence, although symbol grounding does have some utility in philosophical debates. The proper concern for symbol processing researchers, both connectionist and classical, is to construct and manipulate symbols, not to ground them.

Symbol Grounding is an Empirical Problem: Neural Nets are Just a Candidate Component

"Symbol Grounding" is beginning to mean too many things to too many people. My own construal has always been simple: Cognition cannot be just computation, because computation is just the systematically interpretable manipulation of meaningless symbols, whereas the meanings of my thoughts don't depend on their interpretability or interpretation by someone else. On pain of infinite regress, then, symbol meanings must be grounded in something other than just their interpretability if they are to be candidates for what is going on in our heads. Neural nets may be one way to ground the names of concrete objects and events in the capacity to categorize them (by learning the invariants in their sensorimotor projections). These growided elementary symbols could then be combined into symbol strings expressing propositions about more abstract categories. Grounding does not equal meaning, however, and does not solve any philosophical problems.

Symposium: Decision Making in Real World Emergency Situations

This symposium is concerned with models of decision-making under time constraints. While decision making has been the subject of considerable investigation in cognitive science for a number of years, the earlier work tended to concentrate on abstracted decontextualized laboratory models. More recently, however, a number of approaches have evolved that attempt to study real-world, real-time cognitive behavior in complex dynamic environments. All four participants in this symposium will focus on decision-making under emergency and/or time constrained real-life circumstances such as 9-1-1 emergency assistance, air traffic control, and trauma management. These studies involve rapid judgments using partial and sometimes unreliable information that affect public health and safety. The presentations bring together multidisciplinary approaches in cognitive science and a large range of theoretical perspectives, from standard models of information processing to models of situated cognition. These different perspectives make a considerable difference in the kinds of phenomena that we need to examine in empirical research. The study of real-world dynamic environments necessitates the use of complex methods of data collection and analysis, from the use of videotapes to actual participant observations. These various theoretical and methodological approaches are used by the participants (Hunt, Patel, Horst, and Whalen) to study decision-making in "messy", realistic activities, requiring immediate intervention. The approaches are relevant to other areas where similar issues of complexity and urgency must be dealt with.

"Deciding" as Situated Practice: The Work of Public Safety/9-1-1 Call-Takers

A detailed investigation of 9-1-1 operations, using participant observation and video recordings, when taken together with other naturalistic studies of practical human conduct, suggests that the prevailing cognitivist approach to decision making has a number of limitations. This observational and video data provides a framework for an ethnomethodological respecification of the phenomenon.

Symposium: Cognitive Models in User-System Dialogue

What role can cognitive models of dialogue play in supporting conversations between humans and machines? Our purpose in this symposium will be to enhance the community's understanding of user/system dialogue through debate. Three speakers will argue either against the importance of cognitive models for user/system dialogue, for the importance of cognitive modeling, or for relating both positions to current applied work in dialogue management system building. The positions of the three speakers are: Wendy Kellogg: Cognitive models aren't directly useful, unless we allow "cognitive modeling" to extend beyond the skin and incorporate the artifacts of user/system interaction. Robustness in dialogue management comes from task analyses, analysis of system usability issues, and a deep understanding of plans and situated actions (e.g., Suchman, 1987). Dr. Kellogg will use examples from user interface research and design practice to support this position. David Novick: The only way to achieve dialogue management that is robust across tasks is with cognitive modeling. Only with a deep understanding of human memory and performance strategies and limitations can we generalize techniques across domains and tasks in ways that are computationally useful. Dr. Novick will present data and examples from his work to support this position. Hans Brunner: The point of cognitive modeling is to understand fundamentally how humans use language and artifacts to interact. While such information is essential, the point of expressing cognitive models is to obtain methodological assistance, not theoretical tyranny over practice. Dr. Brunner will provide examples from current system building to support this position. Each panelist will briefly present their respective position. As session chair, Bonnie Webber will then moderate a discussion, probing and expanding on areas of disagreement and agreement. In the final 15 minutes, members of the panel will address comments, questions and challenges from the audience.

Symposium: (Quasi-)Systematicity and (Non-)Compositionality in Language

This symposium examines certain fundamental assumptions about structural systematicity and semantic compositionality in language implicit in generative linguistic theory and often adopted uncritically in cognitive science research. Generative analyses have presupposed that most linguistic phenomena can be described in terms of highly general (and hence highly productive) rules of phonological, morphological, syntactic, or semantic combination. Idiosyncratic linguistic phenomena are either relegated to the lexicon or attributed to contextual effects and effectively ignored by the rule-based grammar. However, assumptions about regularity in language (and rule-governed analyses thereof) both originated and are currently maintained only by the examination of a small set of often artificial examples. Moreover, the idealization away from performance data, that is, actual language use, tends to make linguistic rules seem more general and exceptionless than they turn out to be. Studies that involve exhaustive examination of the range of words or constructions to which a particular rule applies demonstrate that there are serious empirical problems with the general-rule approach. Constructions to which general rules are alleged to apply are in fact subtly idiosyncratic. On the other hand, there are low-level regularities often overlooked by generative accounts that merit theoretical attention. The symposium participants will present perspectives on the study of language by cognitive linguists and connectionists, who take an exemplar-based approach to extracting regularity from linguistic phenomena, revealing in the process that certain "irregularities" in language are in fact due to nonprototypicality. The panelists will illustrate the underspecificity, context-sensitivity, partial systematicity and partial compositionality of language, as evidenced in analyses of data at various linguistic levels. The discussion which follows will focus on ways in which computational models of language representation or processing might proceed without embracing assumptions of complete systematicity or perfect compositionality. Network models, in particular, seem to be highly compatible.

Symposium: Cognitive Models of Problem Solving

This symposium highlights what Cognitive Science has gained from recent research in cognitive modelling. The symposium brings together several research teams that share the methodological approach of refining computational models through empirical studies of human problem solvers. The models include the ACME constraint satisfaction network model, the SME model of analogy, the CASCADE process model of problem solving, and Case-Based Reasoning models. These projects represent a diverse set of approaches to computational modelling, and embody very different architectures for cognition. While each approach has contributed unique findings, these models also share a unifying set of assumptions about how cognitive constraints structure problem solving processes. By presenting these diverse approaches to cognitive modeling within a single session, we plan to promote discussion of the principles underlying all such models, and highlight the progress that can be made by combining constraints from computation and cognition.

Submitted Presentations

Processing Time-warped Sequences Using Recurrent Neural Networks: Modelling Rate-dependent Factors in Speech Perception

This paper presents a connectionist approach to the processing of time-warped sequences and attempts to account for some aspects of rate-dependent processing in speech perception. The proposed model makes use of recurrent networks, networks which take input one element at a time and which can pick up long-distance dependencies. Three recurrent network architectures are tested and compared in four computational experiments designed to assess how well time-warped sequences can be processed. The experiments involve two sets of stimuli, some of which reflect aspects of rate-dependent processing in speech: one where the sequences are distinguished by the way their constituent elements are sequentially ordered, and another where the sequences share a similar arrangement of the constituent elements but differ in the duration of some of these elements. The results establish certain conditions on rate-dependent processes in a network of this type vis-a-vis the obligatory use of rate information within the syllable, and throw some light on the basic computer science of recurrent neural networks.
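
To make the task concrete, the sketch below classifies time-warped sequences with a recurrent network under assumptions chosen only for brevity: instead of the architectures compared in the paper, it uses an echo-state-style network (fixed random recurrent weights with a trained linear readout), and the stimuli are two-element sequences whose element durations vary randomly.

```python
# Illustrative sketch (not the paper's architectures): a small echo-state-style
# recurrent network classifies whether a sequence was "A then B" or "B then A",
# with element durations varied to simulate time-warping.
import numpy as np

rng = np.random.default_rng(2)
N_H = 50
W_in = rng.normal(scale=0.5, size=(N_H, 2))
W_rec = rng.normal(scale=1.0, size=(N_H, N_H))
W_rec *= 0.9 / np.max(np.abs(np.linalg.eigvals(W_rec)))   # keep dynamics stable

A, B = np.array([1.0, 0.0]), np.array([0.0, 1.0])

def make_sequence(order):
    """'A then B' or 'B then A', with random element durations (time-warping)."""
    first, second = (A, B) if order == 0 else (B, A)
    d1, d2 = rng.integers(2, 8, size=2)
    return [first] * d1 + [second] * d2

def final_state(seq):
    h = np.zeros(N_H)
    for x in seq:
        h = np.tanh(W_in @ x + W_rec @ h)
    return h

# Train a linear readout on final states with least squares.
orders = rng.integers(2, size=400)
H = np.array([final_state(make_sequence(o)) for o in orders])
readout, *_ = np.linalg.lstsq(H, 2.0 * orders - 1.0, rcond=None)

test_orders = rng.integers(2, size=100)
preds = [np.sign(final_state(make_sequence(o)) @ readout) for o in test_orders]
accuracy = np.mean(preds == (2.0 * np.array(test_orders) - 1.0))
print("accuracy on time-warped test sequences:", accuracy)
```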

A Structured Representation for Noun Phrases and Anaphora

I present a computationally-based representation for indefinite noun phrases and anaphora that models their use in natural language. To this end, three goals for knowledge representation for natural language processing are described: natural form, conceptual completeness, and structure sharing. In addressing these goals, I suggest an augmentation to the representation of variables (corresponding to indefinite noun phrases or anaphora) so that variables are not atomic terms. This leads to an extended, more "natural" representation. It is shown how this representation resolves some representational difficulties with sentences with nonlinear quantifier scoping, in particular, donkey sentences.

Modelling Learning Using Evidence from Speech and Gesture

Speech and gesture provide two different access routes to a learner's mental representation of a problem. We examined the gestures and speech produced by children learning the concept of mathematical equivalence, and found that children on the verge of acquiring the concept tended to express information in gesture which they did not express in speech. We explored what the production of such gesture-speech mismatches implies for models of concept learning. Two models of a mechanism that produces gesture-speech mismatches were tested against data from children learning the concept of mathematical equivalence. The model which best fit the data suggests that gesture and speech draw upon a single set of representations, some of which are accessible to both gesture and speech, and some of which are accessible to gesture but not speech. Thus, gesture and speech form an integrated system in the sense that they do not draw upon two distinct sets of representations. The model implies that when new representations are acquired, they are first accessible only to gesture. Over time, they are then recoded into speech.

Tip-of-the-Tongue in Dementing Speech

We study speech production difficulties in speakers with dementing illnesses by inducing tip-of-the-tongue (TOT) states. We found that dementing speakers experienced TOTs but were unable to supply any information about the target, unlike an age-matched control group. We distinguish between items generated by the subjects as relatives of the targets, subjects' own target words, and what we call "constructive search" words that subjects use in their search for the target. When related words came to mind, they were almost all semantic relatives of the target, whereas in nondementing adults, phonological relatives are also reported. We interpret the results in terms of a three-level interactive account of lexicalization. We propose that the retrieval deficit in dementia occurs in the first stage of lexicalization, that of retrieving abstract lexical forms from a semantic specification, rather than in a second stage of retrieving phonological forms.

Word Segmentation in Written Text: an Argument for a Multiple Subunit System

Two types of word stimuli, easily syllabified (e.g., balcony) and ambisyllabic English words (e.g., balance), were used in a reading task designed to determine if words are processed using strictly syllables as the unit of segmentation or if multiple units of segmentation are used. We could not replicate work done by Prinzmetal, Treiman, and Rho (1986), who found that, with text, more illusory conjunctions occurred within syllables than across syllable boundaries. In contrast, our work supports the hypothesis that English is too complicated a language to use only one segmenting unit. Thus, the pattern of results was dependent on the structure of the words themselves, with the ambisyllabic words being processed using phonemes and not syllables as the unit of segmentation.

Trails as Archetypes of Intentionality

Animal trails ought to be investigated as an archetypal Intentional phenomenon. A trail is Intentional in that it has significance beyond its immediate physical properties, and because the use of trails involves characteristic Intentional states: an animal must in some sense be seeking a destination, the animal must be able to determine which trail it ought to take, must be able to follow it, and must feel some urgency about staying on the trail. Trails evolved along with the abilities to use them. Thus trails and trail-use are not just good exemplars of Intentionality: trails are an archetypal form of Intentionality. It is likely that in some animals there are special brain mechanisms for interacting with trails, and these mechanisms, devoted as they are to an Intentional phenomenon, can shed light on the brain's implementation of other aspects of Intentionality. To understand the phenomenon of Intentionality we must look at as many exemplars as we can. Trails are especially worthy of study because they are external to individual animals, they are socially constructed and historically contingent, and their Intentionality subserves activity.

How Diagrams can Improve Reasoning: Mental Models and the Difficult Cases of Disjunction and Negation

We report two experiments on the effects of diagrams on reasoning. Both studies used "double disjunctions", e.g.: Raphael is in Tacoma or Julia is in Atlanta, or both. Julia is in Atlanta or Paul is in Philadelphia, or both. What follows? Subjects find it difficult to deduce a valid conclusion, such as: Julia is in Atlanta,
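
The difficulty of such "double disjunctions" can be seen by brute-force enumeration of the possibilities consistent with the premises. The sketch below is only an illustration of the underlying logic of the task; it is not the mental models theory itself, nor the diagram manipulations tested in the experiments.

```python
# The premises above are inclusive disjunctions. Enumerate every state of
# affairs consistent with both premises and check which candidate conclusions
# hold in all of them; the number of consistent possibilities is what makes
# double disjunctions hard to hold in mind.
from itertools import product

# r = Raphael in Tacoma, j = Julia in Atlanta, p = Paul in Philadelphia
models = [(r, j, p) for r, j, p in product([True, False], repeat=3)
          if (r or j) and (j or p)]          # both premises must hold

print(len(models), "consistent possibilities:")
for m in models:
    print(m)

# A conclusion is valid if it is true in every consistent possibility, e.g.:
print("Julia in Atlanta OR (Raphael in Tacoma AND Paul in Philadelphia):",
      all(j or (r and p) for r, j, p in models))
```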

Word Priming in Attractor Networks

We propose a new view of word priming in attractor networks, which involves deepening the basins of attraction for primed words. In a network that maps from orthographic to phonological word representations via semantics, this view of priming leads to novel predictions about the interactions between orthographically and/or semantically similar primes and targets, when compared on an orthographic versus a semantic retrieval task. We confirm these predictions in computer simulations of long-term priming in a word recognition network. Connectionist models have strongly influenced current thinking about the nature of human memory storage and retrieval processes. One reason for their appeal is that they can account for a wide range of human performance on tasks such as word recognition (McClelland and Rumelhart, 1981), reading (Seidenberg and McClelland, 1989), and repetition priming (McClelland and Rumelhart, 1986). Further, because connectionist models make relatively specific assumptions about the mechanisms of cognitive processes, they can lead to novel predictions about human performance. One of the most exciting developments in the last decade of human memory research is the characterization of implicit memory (Graf & Schacter, 1985; Schacter, 1985), a form of automatic, unconscious retrieval of previously encountered material. A widely used experimental method for testing implicit memory is repetition priming, in which the accuracy or speed of processing is measured on successive presentations of a target stimulus. Evidence of implicit memory is observed when subjects are more accurate or efficient in responding to previously studied targets than to new targets. The priming literature is highly relevant to connectionist models of learning and memory for two reasons: 1) Priming effects can be extremely long-lasting, ranging from minutes to many hours, or even months, and apparently reflect fundamental automatic ("unsupervised") learning processes employed by the brain. 2) When the prime and target are not identical, but have similar input and/or semantic features, the priming effects may range from facilitation to inhibition; these effects can shed light on the nature of human memory organization, and provide constraints on the representations employed in connectionist models. In this paper, we first review the previous connectionist accounts of priming. We then propose a new view of word priming in attractor networks with orthographic and semantic levels of representation, which involves deepening the basins of attraction for primed words. This leads to some novel predictions about the interactions between primes and targets, which we explore in computer simulations.
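
A minimal, hypothetical sketch of the "deepened basin" idea follows: two word patterns are stored in a small attractor network, the primed word's Hebbian contribution is given extra weight, and an ambiguous cue is allowed to settle. The network type, the size of the priming increment, and the cue construction are assumptions for illustration, not the simulations reported above.

```python
# Toy illustration of deepening a basin of attraction: priming word_a adds an
# extra increment to its Hebbian contribution, so an ambiguous blend of the two
# words tends to settle into the primed word's attractor.
import numpy as np

rng = np.random.default_rng(3)
N = 80
word_a = rng.choice([-1, 1], size=N)
word_b = rng.choice([-1, 1], size=N)

def weights(prime_boost=0.0):
    # prime_boost > 0 deepens word_a's basin of attraction.
    W = ((1.0 + prime_boost) * np.outer(word_a, word_a) + np.outer(word_b, word_b)) / N
    np.fill_diagonal(W, 0)
    return W

def settle(W, state, steps=30):
    for _ in range(steps):
        state = np.sign(W @ state)
        state[state == 0] = 1
    return state

# An ambiguous cue: half of its features come from each word.
cue = np.concatenate([word_a[: N // 2], word_b[N // 2 :]])

for boost in (0.0, 0.4):
    out = settle(weights(boost), cue.copy())
    print(f"boost={boost}: overlap with A={out @ word_a / N:.2f}, "
          f"with B={out @ word_b / N:.2f}")
```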

Learning Generic Mechanisms from Experiences for Analogical Reasoning

Humans appear to often solve problems in a new domain by transferring their expertise from a more familiar domain. However, making such cross-domain analogies is hard and often requires abstractions common to the source and target domains. Recent work in case-based design suggests that generic mechanisms are one type of abstraction used by designers. However, one important yet unexplored issue is where these generic mechanisms come from. We hypothesize that they are acquired incrementally from problem-solving experiences in familiar domains by generalization over patterns of regularity. Three important issues in generalization from experiences are what to generalize from an experience, how far to generalize, and what methods to use. In this paper, we show that mental models in a familiar domain provide the content, and together with the problem-solving context in which learning occurs, also provide the constraints for learning generic mechanisms from design experiences. In particular, we show how the model-based learning method integrated with similarity-based learning addresses the issues in generalization from experiences.

The Time Course of Grammaticality Judgement

Two experiments investigating the time course of grammaticality judgment are presented, using sentences that vary in error type (agreement, movement, omission of function words), part of speech (auxiliaries vs. determiners), and location (early vs. late sentence placement). Experiment 1 is a word-by-word "gating" experiment, similar to the word-level gating paradigm of Grosjean (1980). Results show that some error types elicit a broad and variable "decision region" instead of a "decision point," analogous to results for word-level gating. Experiment 2 looks at on-line judgments of the same stimuli in an RSVP (Rapid Serial Visual Presentation) paradigm, with reaction times measured from several different points within each sentence based on the results of Experiment 1. Qualitatively different results are obtained depending on how and where the error point is defined. Results are discussed in terms of interactive activation models (which do not assume a single resolution point) and discrete parsing models.

Reminding and Interpretation During Encoding

To understand and act upon new experiences, people may draw on specific past experiences. Analogical transfer models suggest that past experiences are used following the encoding of the new information. Research on remindings suggests that specific experiences may be accessed during encoding, and text comprehension research suggests available information may be used to interpret incoming material. Together, these findings suggest a more dynamic organization of interpretation processes, where past experiences may be accessed and used during encoding. This paper reports on two experiments that explore the relationship between reminding and interpretation. In particular, the question was whether remindings that occur early during encoding might bias later interpretation. The first experiment used a post-processing measure of interpretation, while the second measured on-line sentence reading times. In contrast to most contemporary models of analogical transfer, we found that remindings may influence interpretation during encoding.

The Theory-Ladenness of Observation: Evidence from Cognitive Psychology

In this paper we examine the theoretical and empirical work in psychology that is relevant to Hanson's (1958) and Kuhn's (1962) arguments for the theory-ladenness of observation. We conclude that the data support the Hanson and Kuhn position against the earlier positivist views that sensory data provide a completely objective basis for deciding between rival scientific theories. However, the data also suggest that top-down influences on perception are only strong when the incoming sensory input is weak or ambiguous. Thus, in cases where the bottom-up sensory evidence is strong and unambiguous, there is little evidence that theory can override observation, and so the data do not support the strong form of the theory-laden position that is sometimes attributed to Hanson and Kuhn. In addition, we argue that philosophical work on theory-ladenness has focused too narrowly on the issue of perception and ignored attention and memory. Our analysis suggests the need for a much broader view of the mental processes involved in doing science, and our synthesis of the empirical literature shows the influence of top-down schemata on perception, attention, comprehension, and memory. This top-down, bottom-up synthesis seems to us to provide a satisfying resolution of the controversy over the theory-ladenness of perception.

A Restricted Interactive Model of Parsing

Much of the controversy surrounding the autonomy of syntax issue has focused on whether prepositional phrase (PP) attachments can be influenced by prior discourse context. We briefly review four studies that have produced results supporting either the autonomy view or the interactive view, and then describe the results of a recent series of experiments that identify the conditions under which a reader will be garden-pathed when encountering a structurally ambiguous PP. The results of these experiments suggest that a reader can avoid being led down the garden path by a discourse that successfully creates referential ambiguity, but only in sentences where the verb does not require the PP in order to be grammatical. We also describe a restricted interactive parser that can account for these empirical results. The parser divides its task between two processors that use limited forms of semantic and discourse information when making attachments.

Expertise, Text Coherence, and Constraint Satisfaction: Effects on Harmony and Settling Rate

This paper reports three experiments showing that 17 experts' mental representations had significantly higher harmony and faster settling rates than 638 novices' when activation was spread through the representations in a simulation of thinking; that when coherent texts were read by novices, they produced mental representations with significantly higher harmony and faster settling rates than less coherent texts; and that novices whose representations matched the experts' mental representations had significantly higher harmony and faster settling rates. The results were found for declarative experts in history and procedural experts in literary interpretation, for novice groups including U.S. Air Force recruits and undergraduates, and for both history texts and literary texts. These results were consistent with our hypothesis that the quality of a person's prior knowledge determines the harmony and settling rates of their representations and that these can be measured by simulating the spread of activation through the person's mental representation of a subject matter domain. Harmony may also be used as a metacognitive signal. In these studies, we investigated the quality of mental representations. To do this, first we measured each subject's mental representation for a domain, and then we simulated the spread of activation through it.
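
The two measures at issue can be made concrete with a small sketch. It assumes a Smolensky-style harmony, the sum over pairs of w_ij * a_i * a_j, and a simple bounded spreading-activation rule; both are assumptions of this illustration, not necessarily the authors' exact procedure, and the "coherent" versus "incoherent" random weight matrices stand in for good and poor mental representations.

```python
import numpy as np

rng = np.random.default_rng(2)

def settle(W, a0, rate=0.2, tol=1e-4, max_cycles=500):
    """Spread activation until the state stops changing; return state and cycle count."""
    a = a0.copy()
    for cycle in range(1, max_cycles + 1):
        new_a = np.clip(a + rate * (W @ a), 0.0, 1.0)   # bounded spreading activation
        if np.max(np.abs(new_a - a)) < tol:
            return new_a, cycle
        a = new_a
    return a, max_cycles

def harmony(W, a):
    return float(a @ W @ a)          # H = sum over i, j of w_ij * a_i * a_j

n = 20
a0 = np.full(n, 0.1)
coherent = np.abs(rng.normal(0, 0.05, (n, n)))      # mostly mutually supportive links
incoherent = rng.normal(0, 0.05, (n, n))            # mixed supportive/contradictory links

for name, W in (("coherent", coherent), ("incoherent", incoherent)):
    W = (W + W.T) / 2.0                              # symmetric weights
    np.fill_diagonal(W, 0.0)
    a, cycles = settle(W, a0)
    print(f"{name:10s}: harmony = {harmony(W, a):6.2f}, settled in {cycles} cycles")
```

With symmetric weights this update is a clipped gradient ascent on harmony, so the network is guaranteed to settle; the better-structured matrix settles in fewer cycles and reaches higher harmony, which is the contrast the experiments exploit.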

Orientation and Complexity Effects: Implications for Computational Models of Visual Analogical Reasoning

Several computational models have recently been proposed to define or describe visual representations. Is it reasonable to accept these models as plausible explanations of human visual processing? One way to address this question is to examine whether the models are affected by variables that have been shown to affect human visual analogical reasoning. Two such variables are stimulus complexity and differences in orientation of stimuli that must be compared. Unfortunately, the experiments that have been used to uncover these effects typically use stimuli that are too complex to be easily defined within the structure of computational models. In the present paper this problem is resolved by producing the standard set of results for complexity and orientation with a set of easily defined stimuli. We therefore see this work as a preliminary step in the comparison of human and computational models of visual processing. We report results of a human experiment investigating mental rotation and complexity effects as well as an attempt to mimic these data with an implementation of one computational model.

Letter Detection in German Silent Reading: Issues of Unitization and Syllable-Final Devoicing

In a German variant of a letter-detection experiment, native speakers of German read passages in German, searching for the letters d or t. Many more instances of the letter d in definite articles and in the word und were missed than were missed in nouns, verbs, and adjectives. Subjects also missed more syllable-final instances of the letter d than syllable-initial d or syllable-final t. The first finding supports earlier similar findings by Healy (e.g., 1976) for English, and Ferstl (1991) for German, with respect to high-frequency words in the language being read in units larger than the letter. The second finding is understood in terms of the German phenomenon of neutralizing the difference in pronunciation between d and t in syllable-final position.

Double Dissociation in Artificial Neural Networks: Implications for Neuropsychology

We review the logic of neuropsychological inference, focusing on double dissociation, and present the results of an investigation into the dissociations observed when small artificial neural networks trained to perform two tasks are damaged. We then consider how the dissociations discovered might scale up for more biologically and psychologically realistic networks. Finally, we examine the methodological implications of this work for the cornerstone of cognitive neuropsychology: the inference from double dissociation to modularity of function.

A Better Tool for the Cognitive Scientist's Toolbox: Randomization Statistics

Cognitive Science has typically proceeded with two major forms of research: model-building and experimentation. Traditional parametric statistics are normally used in the analysis of experiments, yet the assumptions required for parametric tests are almost never met in Cognitive Science. The purpose of this paper is twofold: to present a viable alternative to traditional parametric statistics—the randomization test—and to demonstrate that this method of statistical testing is particularly suited to research in Cognitive Science.
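
A minimal sketch of the procedure the paper advocates: the condition labels of a (made-up) two-group reaction-time data set are repeatedly shuffled, and the observed group difference is compared against the resulting randomization distribution. The data values, number of permutations, and two-tailed criterion are illustrative choices only.

```python
import numpy as np

rng = np.random.default_rng(0)
group_a = np.array([512., 498., 530., 475., 520., 541., 488., 505.])   # invented RTs (ms)
group_b = np.array([551., 563., 529., 580., 545., 598., 571., 556.])

observed = group_b.mean() - group_a.mean()
pooled = np.concatenate([group_a, group_b])
n_a = len(group_a)

n_perm, count = 10_000, 0
for _ in range(n_perm):
    perm = rng.permutation(pooled)                      # shuffle condition labels
    diff = perm[n_a:].mean() - perm[:n_a].mean()
    if abs(diff) >= abs(observed):                      # two-tailed comparison
        count += 1

p_value = count / n_perm
print(f"observed difference = {observed:.1f} ms, randomization p = {p_value:.4f}")
```

No normality or random-sampling assumptions are needed; the test leans only on the random assignment of subjects to conditions, which is the point of the contrast with parametric tests.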

Whorfian Biases in the Categorization of Events

The purpose of this paper can be viewed from two different but related perspectives. First, it investigates the role of language in concept learning. Second, it investigates this by asking how event categories are acquired. From the first perspective, our experiment showed that syntactic Whorfian biases in human categorization actually exist. The new methodology employed provides us with a tool to study in more detail the nature of those biases. The Whorfian hypothesis does not necessarily have to be tested cross-culturally. It can successfully be taken into the lab with subjects belonging to the same linguistic community. From the second perspective, our experiment showed that the addition of verbal descriptors to a set of animation scenes in some systematic way facilitates learning of the regularities in the scenes. This result is a new piece of evidence supporting the focused sampling theory of category acquisition.

Reading as a Planned Activity

This paper discusses how the goals of a reader lead to reading as a planned activity. The external situation provides many of these goals, as well as background, which allow the reader to focus the reading activity. A system is described in which a plan for perusing a given piece of text is chosen with regard to the reading goal(s) and the structure and content of the text. As reading progresses and goals are satisfied or change, the plan is modified using adaptive planning. This model is supported by a protocol study of subjects reading instructions to use a fax machine for the first time. The main sections of this paper will: 1) describe reading in terms of a larger context of activity; 2) introduce the necessary components needed for a reader that plans; and 3) illustrate planned reading in the domain of device instructions.

Situated Sense-Making: A Study of Conceptual Change in Activity-Based Learning

Sense-making is an essential process in learning for understanding. We describe a study of sense-making involving two pairs of students learning basic elements of the visual programming language Prograph. The study emphasizes the critical role of activity in mediating concept development and refinement. Video protocols of learning behavior were recorded and analyzed. The analysis focuses on the situated nature of the meaning construction process. It reveals how exploration, explanation, and expectation play important roles in the sense-making process.

Barriers to Conceptual Change in Learning Science Concepts: A Theoretical Conjecture

This paper identifies and characterizes the existence of a specific class of "constructs" which may be particularly difficult to learn and understand. Their difficulty necessitates conceptual change, which is a form of learning which we define in the context of this class of constructs. Our explanation seems to fit a diverse set of data concerning the difficulty in learning science concepts of this nature. Instructional implications for how we can overcome this barrier to conceptual change will also be entertained.

Factors that Influence How People Respond to Anomalous Data

In order to understand conceptual change, it is crucial to understand how people respond to anomalous information. The purpose of this paper is to present a framework for understanding how people respond to anomalous data and why they respond as they do. First, we present a taxonomy of seven responses to anomalous data. Second, we present an analysis of eight factors that are hypothesized to influence which of these seven responses an individual will choose. Finally, we present the results of an experiment that investigates several of these eight factors. A key to understanding conceptual change is understanding how people respond to anomalous information. Information that contradicts an individual's current beliefs is important because without it, an individual has no need to alter current conceptions. Without the goad of anomalous information, current conceptions are perfectly adequate for understanding the world. A particularly important form of anomalous information is anomalous data. Anomalous data have played a central role in conceptual change in the history of science (Kuhn, 1962) and in science education (Chinn & Brewer, in press). Moreover, most artificial intelligence systems that model scientific discovery and theory change use anomalous data to trigger the theory change process (e.g., Kulkarni & Simon, 1988). Chinn and Brewer (1992, in press) have proposed a detailed taxonomy of possible responses to anomalous data. When an individual who holds theory A encounters anomalous data, which may be accompanied by an alternative theory B, the individual can choose one of seven responses to the anomalous data: 1. Ignore the data. 2. Reject the data because of methodological flaws, random error, or alleged fraud. 3. Exclude the data from the domain of theory A by asserting that theory A is not intended to explain the data. 4. Hold the data in abeyance, i.e., concede that theory A cannot explain the data at present but assert that theory A will be elaborated in the future so that it can explain the data. 5. Accept but reinterpret the data so as to make the data consistent with theory A. 6. Accept the data and make minor, peripheral changes to theory A. 7. Accept the data and change theories, possibly to theory B. Of these seven responses, only the last two involve any change in theory A, and only the last produces a change that can be called conceptual change. The first six responses are theory-preserving responses because the individual discounts the anomalous data in order to protect theory A. In this paper, we address a crucial issue in conceptual change: What causes people to respond to anomalous data as they do? For example, why does an individual reject data in one instance, reinterpret data in another instance, and change theories in yet another instance? We propose a set of eight factors that influence how people respond to anomalous data; then we report the results of an experiment designed to investigate several of these factors.

Specificity of Practice Effects in the Classic Stroop Color-Word Task

Specificity effects of practice on the classic Stroop color-word task were explored using two different practice tasks, practice on the Stroop task itself and practice on simple color naming. Clear evidence for specificity effects was found, and this specificity persisted across a one-month delay. Stroop practice, but not color-naming practice, led to a pattern of improvement pointing to an advantage for practiced stimuli over unpracticed stimuli on both Stroop and color-naming tests but a disadvantage for practiced stimuli on reading and "reverse-Stroop" tests. The advantage for practiced stimuli was maintained on versions of the Stroop test that used orthographic manipulations of the stimuli. This pattern of specificity is inconsistent with practice as specific to the word forms. It is consistent with practice as specific to colors, to semantic meanings of the words, or to a combination of these two.

Attention and Awareness in Sequence Learning

How does implicit learning interact with the availability of explicit information? In a recent series of experiments, Curran & Keele (1992) demonstrated that sequence learning in a choice reaction setting involves at least two different processes, that result in differing availability of the acquired knowledge to conscious inspection, and that are differentially affected by the availability of attentional resources. In this paper, I propose a new information-processing model of sequence learning and explore how well it can account for these data. The model is based on the Simple Recurrent Network (Elman, 1990; Cleeremans & McClelland, 1991; Cleeremans, 1993), which it extends by allowing additional information to modulate processing. The model implements the notion that awareness of sequence structure changes the task from one of anticipating the next event based on temporal context to one of retrieving the next event from short-term memory. This latter process is sensitive to the availability of attentional resources. When the latter are available, performance is enhanced. However, reliance on representations that depend on attentional resources also results in serious performance degradation when these representations become less reliable, as when a secondary task is performed concurrently with the sequence learning task.
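
For readers unfamiliar with the base architecture, the sketch below is a toy Simple Recurrent Network trained to anticipate the next symbol of a sequence from temporal context. The symbol set, network sizes, and the one-step-truncated gradient are illustrative assumptions; this is the plain SRN, not the extended, modulated model the abstract describes.

```python
import numpy as np

rng = np.random.default_rng(0)
symbols = "ABCABD" * 200                 # toy sequence with temporal structure
vocab = sorted(set(symbols))             # ['A', 'B', 'C', 'D']
n_in = n_out = len(vocab)
n_hid, lr = 12, 0.1

def one_hot(ch):
    v = np.zeros(n_in)
    v[vocab.index(ch)] = 1.0
    return v

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

W_xh = rng.normal(0, 0.5, (n_hid, n_in))
W_hh = rng.normal(0, 0.5, (n_hid, n_hid))     # context (recurrent) weights
W_hy = rng.normal(0, 0.5, (n_out, n_hid))

context = np.zeros(n_hid)
for t in range(len(symbols) - 1):
    x, target = one_hot(symbols[t]), one_hot(symbols[t + 1])
    h = np.tanh(W_xh @ x + W_hh @ context)    # hidden state from input plus copied context
    y = softmax(W_hy @ h)
    dy = y - target                           # cross-entropy gradient at the output
    dh = (W_hy.T @ dy) * (1 - h ** 2)         # gradient truncated at one time step
    W_hy -= lr * np.outer(dy, h)
    W_xh -= lr * np.outer(dh, x)
    W_hh -= lr * np.outer(dh, context)
    context = h                               # copy hidden layer back to the context units

# after training, check the average probability assigned to the correct next symbol
context, probs = np.zeros(n_hid), []
for t in range(len(symbols) - 1):
    h = np.tanh(W_xh @ one_hot(symbols[t]) + W_hh @ context)
    probs.append(softmax(W_hy @ h)[vocab.index(symbols[t + 1])])
    context = h
print(f"mean p(correct next symbol) = {np.mean(probs):.2f} (chance = {1 / n_out:.2f})")
```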

Model Construction and Criticism Cycles in Expert Reasoning

A case study is described which documents the generation of a new hypothesis in the form of a visualizable model. It is argued that several of the processes used were neither deduction nor induction by enumeration. Rather, a new explanatory model was invented via a successive refinement process of hypothesis generation, evaluation, and modification, starting from an initial rough analogy. New predictions emerged when the subject "ran" the model. Thus it appears to be possible to investigate the model construction processes of experts through thinking aloud protocols.

Connectionism and Probability Judgement: Suggestions on Biases

In the present paper we deal with several violations of normative rules in probability judgement: the inverse base-rate effect and the conjunction fallacy, among others. To reproduce these failures, a sample of subjects was asked to judge the probability of several items according to what they had learnt in a previous learning task on medical diagnosis. Attempts are made to explain the results within the connectionist framework. We base our approach on a simple network, designed by Gluck and Bower (1988), which updates its weights using the LMS rule.
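
The core of a Gluck and Bower style network is a single layer of weights updated with the LMS (delta) rule. The sketch below illustrates that update on an invented symptom-to-disease learning task; the cue probabilities, learning rate, and trial structure are made up and do not reproduce the paper's materials.

```python
import numpy as np

rng = np.random.default_rng(3)
n_cues, n_trials, lr = 4, 2000, 0.05
w = np.zeros(n_cues)                 # weights from symptom cues to a single disease unit

for _ in range(n_trials):
    cues = (rng.random(n_cues) < [0.2, 0.4, 0.6, 0.8]).astype(float)   # present/absent symptoms
    # invented generative rule: the disease is more likely when cue 0 is present
    target = 1.0 if rng.random() < (0.8 if cues[0] else 0.2) else 0.0
    output = w @ cues                         # linear activation of the disease unit
    w += lr * (target - output) * cues        # LMS (delta) rule: reduce squared prediction error

print("learned cue weights:", np.round(w, 2))
```

Because the LMS rule apportions weight according to how well each cue predicts the outcome in competition with the others, networks of this kind can reproduce base-rate-related distortions in subjects' probability judgements, which is what the paper exploits.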

Inducing a Shift from Intuitive to Scientific Knowledge with Inquiry Training

Recent research in science education has shown that students frequently fail to understand scientific concepts and principles, e.g., photosynthesis. In the present study, elementary-school aged children were trained in how to conduct a collaborative inquiry into photosynthesis. Concept maps and comprehension pretests and posttests were used to assess the effects of the training. Students who had received the training had concept maps which contained significantly more accurate scientific relational links depicting a more functional understanding of photosynthesis, and they retained more subject matter knowledge than the students who did not receive the training. The research supports the importance of inquiry training to facilitate conceptual understanding of scientific knowledge and emphasizes the usefulness of conceptual mapping techniques as evaluative measures of students' conceptual change.

Toward a Model of Student Education in Microworlds

Microworlds are educational environments intended to support the student in the active exploration of a subject-matter domain. We present preliminary work whose goal is to attain a better understanding of the educational effectiveness of microworlds through an examination of the learning processes that they exploit. The learning processes are made explicit within a computational model of the interaction between a student and a microworld for simple electrostatics. We focus, in particular, on the implementation of an episodic memory mechanism that gives insight into the processes involved in learning from incorrect behavior.

Non-deterministic Prepositional Phrase Attachment

Existing models of sentence comprehension typically adopt a deterministic approach that decides on the correct parse of a sentence. In essence, these models consist of algorithms that statically capture a priori rules for disambiguation and seldom take into account the context of interpretation. Also, their deterministic nature eliminates the possibility of recognizing a genuine ambiguity. We argue that the very essence of the quantitative model of memory we have developed, that is, its time-constrained nature, allows for a non-deterministic contextual approach to structural disambiguation. In this paper, we focus specifically on the problem of PP (Prepositional Phrase) Attachment. More precisely, we contend that a solution to this problem depends on the use of both a massively parallel time-constrained architecture and a quantitatively-defined context.

Tau Net: The Way to do is to be

We describe a technique for automatically adapting to the rate of an incoming signal. We first build a model of the signal using a recurrent network trained to predict the input at some delay, for a "typical" rate of the signal. Then, fixing the weights of this network, we adapt the time constant τ of the network using gradient descent, adapting the delay appropriately as well. We have found that on simple signals, the network adapts rapidly to new inputs varying in rate from twice as fast as the original signal, down to ten times as slow. So far our results are based on linear rate changes. We discuss the possibilities of application to speech.

Properties of the Principle-Based Sentence Processor

This paper defends a principle-based model of sentence processing, and demonstrates that such a model must have two specific properties: (1) it must use a partially top-down parsing mechanism, possibly restricted to functional structure, and (2) it must use an Active Trace Strategy, which freely posits traces before their linear string position. It is argued that both of the proposed mechanisms follow from an overarching Principle of Incremental Comprehension.

Representation of Temporal Patterns in Recurrent Networks

In order to determine the manner in which temporal patterns are represented in recurrent neural networks, networks trained on a variety of sequence recognition tasks are examined. Analysis of the state space of unit activations allows a direct view of the means employed by the network to solve a given problem, and yields insight both into the class of solutions these networks can produce and how these will generalise to sequences outside the training set. This intuitive approach helps in assessing the potential of recurrent networks for a variety of modelling problems.
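
One common way to carry out such a state-space analysis is to project the hidden-unit activation vectors onto their first few principal components. The sketch below shows the mechanics on stand-in data; in practice the rows of `states` would be recorded from the trained recurrent network while it processes test sequences, and the low-dimensional coordinates would then be plotted or clustered.

```python
import numpy as np

rng = np.random.default_rng(4)
n_steps, n_hidden = 200, 16
# stand-in for recorded hidden states, shape (time steps, hidden units)
states = np.tanh(rng.normal(size=(n_steps, n_hidden)) @ rng.normal(size=(n_hidden, n_hidden)))

centred = states - states.mean(axis=0)
_, _, vt = np.linalg.svd(centred, full_matrices=False)   # principal directions of the state cloud
projection = centred @ vt[:2].T                          # coordinates on PC1 and PC2

explained = np.var(projection, axis=0) / np.var(centred, axis=0).sum()
print("variance explained by PC1, PC2:", np.round(explained, 3))
```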

Why No Mere Mortal Has Ever Flown out to Center Field But People Often Say They Do

The past tense has been the source of considerable debate concerning the role of connectionist models in explaining linguistic phenomena. In response to Pinker and Prince (1988), several connectionist models have been developed that compute a mapping from the present tense phonological form of a verb to a past tense phonological form. Most of these models cannot distinguish between homophones such as FLY-FLEW and FLY-FLIED (as in "flied out"). Kim, Pinker, Prince, & Prasada (1991) have suggested that the addition of semantic information to such nets will not provide an adequate solution to this homophony problem. They showed that English speakers use derivational status, rather than semantic information, in generating past tenses. We provide evidence contradicting this account. Subjects' rated preferences for past tense forms are predicted by semantic measures; moreover, a simulation model shows that semantic distance provides a basis for learning the alternative past tenses for words such as FLY. We suggest a reconciliation of the two theories in which knowledge of "derivational status" arises out of semantic facts in the course of learning.

Students' Beliefs About the Circulatory System: Are Misconceptions Universal?

Misconceptions are a special case of false beliefs. They should be both robust and important to a person's belief system. Chi (1992) has asserted that in some domains, such as the circulatory system, students' initial conceptions are of the same general ontological class as the textbook conceptions. They should therefore not be as robust as the initial conceptions in domains such as physics, in which the initial conception may be of the wrong class. The initial beliefs of 12 eighth grade students about the circulatory system included a variety of false beliefs. Statements of both correct and incorrect beliefs were used to generate maps of students' initial mental models. This allowed an assessment of the importance of the false beliefs. Even deeply embedded beliefs were removed by instruction. Importance was also measured by the impact of false beliefs on a pre-test, which was not significant. Resistance to instruction was tested by having students read a text. One analysis checked individual false beliefs, to see if contradiction by the text resulted in false belief removal. Beliefs which were contradicted were generally removed. These results are not consistent with the notion that students bring with them to instruction important and robust misconceptions about the circulatory system.

Integrating Learning into Models of Human Memory: The Hebbian Recurrent Network

We develop an interactive model of human memory called the Hebbian Recurrent Network (HRN), which integrates work in the mathematical modeling of memory with that in error-correcting connectionist networks. It incorporates the Matrix Model (Pike, 1984) into the Simple Recurrent Network (SRN; Elman, 1989). The result is an architecture which has the desirable memory characteristics of the matrix model, such as low interference and massive generalization, but which is able to learn appropriate encodings for items, decision criteria, and the control functions of memory which have traditionally been chosen a priori in the mathematical memory literature. Simulations demonstrate that the HRN is well suited to a recognition task inspired by typical memory paradigms. In comparison to the SRN, the HRN is able to learn longer lists, and is not degraded significantly by increasing the vocabulary size.
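
The matrix-memory half of the proposal can be illustrated on its own. In the spirit of the Matrix Model, study items are stored as summed outer products and a probe's recognition strength is its match to the composite memory; everything below (the item coding, list length, and matched-filter strength measure) is an illustrative assumption, not the HRN itself.

```python
import numpy as np

rng = np.random.default_rng(5)
n_features, list_length = 50, 10
items = rng.normal(0, 1 / np.sqrt(n_features), size=(list_length, n_features))

M = np.zeros((n_features, n_features))
for item in items:
    M += np.outer(item, item)            # superimposed (auto-associative) storage

def strength(probe):
    return float(probe @ M @ probe)      # matched-filter recognition strength

old = np.mean([strength(i) for i in items])
new = np.mean([strength(rng.normal(0, 1 / np.sqrt(n_features), n_features))
               for _ in range(list_length)])
print(f"mean strength: studied items = {old:.2f}, new items = {new:.2f}")
```

Studied items match the composite far better than new items, with interference growing only gradually with list length; the HRN's contribution, per the abstract, is to learn the item encodings and decision criteria that such a memory presupposes.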

Learning Language via Perceptual/Motor Experiences

We postulate that early childhood language semantics is "grounded" in perceptual/motor experiences. The DETE model has been constructed to explore this hypothesis. During learning, DETE's input consists of simulated verbal, visual and motor sequences. After learning, DETE demonstrates its language understanding via two tasks: (a) Verbal-to-visual/motor association -- given a verbal sequence, DETE generates the visual/motor sequence being described; (b) Visual/motor-to-verbal association -- given a visual/motor sequence, DETE generates a verbal sequence describing the visual/motor input. DETE's learning abilities result from a novel neural network module, called katamic memory. DETE is implemented as a large-scale, parallel, neural/procedural hybrid architecture, with over 1 million virtual processors executing on a 16K-processor CM-2 Connection Machine.

Human Benchmarks on AI's Benchmark Problems

Default reasoning occurs when the available information does not deductively guarantee the truth of the conclusion; and the conclusion is nonetheless correctly arrived at. The formalisms that have been developed in Artificial Intelligence to capture this mode of reasoning have suffered from a lack of agreement as to which non-monotonic inferences should be considered correct; and so Lifschitz 1989 produced a set of "Nonmonotonic Benchmark Problems" which all future formalisms are supposed to honor. The present work investigates the extent to which humans follow the prescriptions set out in these Benchmark Problems.

Post-encoding Verbalization Impairs Transfer on Artificial Grammar Tasks

In a series of studies, Schooler and Engstler-Schooler (1990) showed that verbalization of previously encountered non-verbal stimuli can impair subsequent memory performance. The present study investigates the possibility that the verbal disruption of non-verbal processes, called verbal overshadowing, may be applied to implicit learning, i.e., where what is learned is difficult to verbalize. One frequently studied area of implicit learning is artificial grammars (e.g., Reber & Lewis, 1977). In the artificial grammar research, it has been shown that subjects can learn information about regularities in letter strings generated from a finite state grammar, as measured by transfer tests, while being unable to usefully state what those regularities are. The apparent disparity between subjects' competent performance on artificial grammar tasks and their inability to explain the rules of those tasks suggests the possibility that verbalization following memorization of artificial grammar strings may impair subjects' performance on a transfer task. In this study, subjects memorized a subset of grammatical letter strings, then half of them verbalized the rules they learned during memorization. The verbal subjects performed significantly worse than the non-verbal subjects on a transfer task, providing preliminary evidence that verbalization may impair transfer when the learned information is difficult to verbalize.

A Neural Net Investigation of Vertices as Image Primitives in Biederman's RBC Theory

Neural networks have been used to investigate some of the assumptions made in Biederman's recognition by components (RBC) theory of visual perception. Biederman's RBC theory states, in part, that object vertices are critical features for the 2D region segmentation phase of human object recognition. This paper presents computational evidence for Biederman's claim that viewpoint-invariant vertices are critical to object recognition. In particular, we present a neural network model for 2D object recognition using object vertices as image primitives. The neural net is able to recognize objects with as much as 65% mid-segment centered contour deletion, while it is unable to recognize objects with as little as 25% vertex-centered deletion. In addition, the neural net exhibits shift, scale, and partial rotational invariance.

Children with Dyslexia Show Deficits on Most Primitive Skills

Anomalies have been found in a range of skills for children with dyslexia. The study presented here investigated performance on the full range of primitive skills for children with dyslexia and normal children at ages 8, 11 and 16 years. Unexpectedly severe deficits were revealed in a range of skills, including motor skill, phonological skill, and processing speed. Overall, the performance of the 16 year old children with dyslexia was no better than that of the 8 year old normal children, with some skills being significantly worse, and some better. The results are interpreted in terms of a developmental progression in which children with dyslexia suffer from general deficits in primitive skill learning, but are able to consciously compensate in many skills. We believe that a connectionist learning framework may provide a parsimonious account of the range of deficits, providing a potential link between these difficulties in skilled performance and the underlying neuroanatomical abnormalities.

Recognizing Handprinted Digit Strings: a Hybrid Connectionist/Procedural Approach

We describe an alternative approach to handprinted word recognition using a hybrid of procedural and connectionist techniques. We utilize two connectionist components: one to concurrently make recognition and segmentation hypotheses, and another to perform refined recognition of segmented characters. Both networks are governed by a procedural controller which incorporates systematic domain knowledge and procedural algorithms to guide recognition. We employ an approach wherein an image is processed over time by a spatiotemporal connectionist network. The scheme offers several attractive features including shift-invariance and retention of local spatial relationships along the dimension being temporalized, a reduction in the number of free parameters, and the ability to process arbitrarily long images. Recognition results on a set of real-world isolated ZIP code digits are comparable to the best reported to date, with a 96.0% recognition rate and a rate of 99.0% when 9.5% of the images are rejected.

A Model-based Approach to Learning from Attention-focusing Failures

In this paper we present a theory of how machines can learn from attention-focusing failures. Our method requires that learning mechanisms have available a detailed model of the decision-making mechanisms they are to modify; it is therefore central to this research to develop and present such a model. The portions of our developing model presented below concern those parts of a decision-making apparatus that should be approximately the same from one agent to another. Though learning mechanisms would have to be sensitive to both the idiosyncratic and agent-invariant elements of an adaptable decision architecture, we have concentrated on the invariant elements, which provide the most general constraints on learning.

Toward Formalizing Dialectical Argumentation

We explore the use of argumentation for justifying claims reached by plausible reasoning methods in domains where knowledge is incomplete, uncertain, or inconsistent. We present elements of a formal theory of argumentation that includes two senses of argument, argument as supporting explanation and argument as dialectical process. We describe a partial implementation of the theory, a program that generates argument structures that organize relevant, available, plausible support for both a claim and its negation. Then we describe a theory of argument as dialectical process, where the format of a two-sided argument is used to intertwine the strengths and weaknesses of support for competing claims, so arguments can be refuted and directly compared.

Match and Mismatch in Phonological Context

Earlier research suggests that the lexical access process in humans is highly intolerant of mismatch. When heard in isolation, a sequence such as [wikib] (wickib) is unacceptable as a token of wicked even though the mismatch is minimal. This observation is apparently inconsistent with the vulnerability of natural speech to phonological change. For example, the word wicked may be phonetically realised as [wikib] in the context of "wicked prank". This is an example of place assimilation, where the underlying /d/ in wicked acquires a labial character from the following labial consonant. The cross-modal priming experiments reported here test the hypothesis that phonologically regular variation does not in fact create mismatch when it occurs in the appropriate context. Subjects heard tokens like [wikib] embedded in either phonologically viable context (where the following word was prank) or in unviable context (where the following word was game). When heard in unviable context, the distorted tokens produced a strong mismatch effect. In contrast, the distorted tokens in viable context primed as strongly as the undistorted words. These results suggest that the on-line processes of speech interpretation and lexical access must perform some kind of phonological inferencing when interpreting speech at the lexical level.

Learnability and Markedness: Dutch Stress Assignment

This paper investigates the computational grounding of learning theories developed within a metrical phonology approach to stress assignment. In current research the Principles and Parameters approach to learning stress is pervasive. We point out some inherent problems associated with this approach in learning the stress system of Dutch. The paper focuses on two specific aspects of the learning task: we empirically investigate the effect of input encodings on learnability, and we examine the possibility of a data-oriented approach as an alternative to the Principles and Parameters approach. We show that a data-oriented similarity-based machine learning technique (Instance-Based Learning), working on phonemic input encodings, is able to learn metrical phonology abstractions based on concepts like syllable weight, and that in addition, it is able to extract generalizations which cannot be expressed within a metrical framework.
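
As a schematic of what a data-oriented, instance-based learner of this kind does, the sketch below classifies the stressed syllable of a word from its nearest stored neighbours. The miniature lexicon, the syllable-weight coding (L = light, H = heavy, S = superheavy, over the last three syllables), and the overlap metric are all invented for illustration and do not reflect the Dutch materials or encodings used in the paper.

```python
from collections import Counter

# invented training instances: (weights of last three syllables, stressed syllable from the end)
train = [
    (("L", "L", "H"), 1), (("L", "H", "L"), 2), (("H", "L", "H"), 1),
    (("L", "S", "L"), 2), (("H", "H", "L"), 2), (("L", "L", "L"), 3),
    (("H", "S", "L"), 2), (("L", "H", "H"), 1),
]

def overlap(a, b):
    """Similarity = number of matching positions (a simple overlap metric)."""
    return sum(x == y for x, y in zip(a, b))

def classify(pattern, k=3):
    """Predict the stress position from the k most similar stored instances."""
    neighbours = sorted(train, key=lambda ex: overlap(pattern, ex[0]), reverse=True)[:k]
    votes = Counter(label for _, label in neighbours)
    return votes.most_common(1)[0][0]

print("predicted stressed syllable (from the end):", classify(("H", "L", "L")))
```

The point of the contrast with Principles and Parameters is that nothing here sets parameters; regularities such as the role of syllable weight emerge from similarity over stored instances.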

The Comprehension of Complex Graphics: Facilitating Effects of Text on Integration and Inference-Making

The goal of the present research was to investigate the effects on comprehension and inference-making when prior knowledge about a building was manipulated by means of a text. The text presented an expert-like "walk-through" description of the building, as well as exemplars of the eight types of semantic information previously found to be employed in the comprehension of architectural plans. This was motivated by previous research which: 1) found that the nature of the encoding process is related to both specific prior knowledge of the building and to expertise, and 2) suggested that experts' representations of the building included much more 3-dimensional information whereas sub-experts' representations of the building were much more similar to the 2-dimensional plans used to depict the building. Results indicated that the text had positive effects on specific types of semantic information acquired about the building, and that inferences on this information permitted the development of mental models which included a greater number of 3-dimensional aspects of the building. There were also important findings related to expertise which suggest that the search, pattern-recognition, and inference-making operators applied by novices were different from those applied by experts.

A Computer Model of Chess Memory

Chess research provides rich data for testing computational models of human memory. This paper presents a model which shares several common concepts with an earlier attempt (Simon & Gilmartin, 1973), but features several new attributes: dynamic short-term memory, recursive chunking, more sophisticated perceptual mechanisms and use of a retrieval structure (Chase & Ericsson, 1982). Simulations of data from three experiments are presented: 1) differential recall of random and game positions; 2) recall of several boards presented in short succession; 3) recall of positions modified by mirror image reflection about various axes. The model fits the data reasonably well, although some empirical phenomena are not captured by it. At a theoretical level, the conceptualization of the internal representation and its relation with the retrieval structure needs further refinement.

Associating What and Where Using Temporal Cues

Johansson showed that people can recognize human gaits from brief presentation of only a few moving dots. A recently constructed connectionist model, MARS, is the first program of any type to model this phenomenon. One of the key ideas is that an association is formed between visual actions and spatial locations. Simulations show that in MARS the association mechanism is necessary for reliable recognition of multiple actions, and that the action-recognition process and the location-association process act in concert to arrive at a stable interpretation of the image sequence. Association between location and action is performed in a spatiotopic network of cells that specialize in detecting temporal synchrony between visual events in the scene and predictions generated by active models of actions held in memory. The model suggests that such a mechanism may be used to build and maintain associations acquired sequentially.

From Models to Cases: Where Do Cases Come From and What Happens When A Case is Not Available

The origin of cases is a central issue in cognitive models of case-based reasoning. Some recent work proposes the use of weak methods for generating solutions when a relevant case is not available, and chunking the solutions into cases for potential reuse. Our theory of case-based spatial planning and navigation suggests a different approach in which mental models of the world provide a way for solving new problems and acquiring cases. These mental models also provide a scheme for organizing the case memory, adapting old cases, and verifying new plans. The use of multiple methods, such as the case-based and model-based methods, raises another important issue in reasoning, namely, how to opportunistically select and dynamically integrate the methods. Our theory suggests the use of simple meta-reasoning to recursively select an appropriate method as the problem is decomposed into subproblems. This leads to the dynamic integration of different methods, where one method is used for one subproblem and a different method for another subproblem.

Modularity and the Possibility of a Cognitive Neuroscience of Central Systems

The methodology of cognitive neuroscience presupposes that cognitive functions are modular. Fodor (1983) offered an interesting characterization of various forms of modularity and an argument to the effect that while language and input systems are probably modular, higher cognitive processes such as problem solving probably are not. If this is the case, there will be methodological obstacles in developing a neuroscience of higher cognitive functions. We offer an analysis of the issue of modularity as it affects the cognitive sciences, evaluate Fodor's characterization with respect to this analysis, and suggest that his argument for the nonmodularity of central systems has a very narrow scope. It is not something that neuroscience needs to necessarily worry about.

A Psychometric PDP Model of Temporal Structure in Story Recall

A new parallel distributed processing (PDP) model possessing a statistical interpretation is proposed for extracting critical psychological regularities from the temporal structure of human free recall data. The model is essentially a non-linear five-parameter Jordan sequential network for predicting categorical time-series data. The model consists of five parameters: an episodic strength parameter (η), a causal strength parameter (β), a shared causal/episodic strength parameter (γ), a working memory span parameter (μ), and a number-of-items-recalled parameter (λ). The "psychological validity" of the model's parameter estimates was then evaluated with respect to the existing experimental literature using children's and adults' free recall data from four stories. The model's parameter estimates replicated and extended several previously known experimental findings. In particular, the model showed: (i) effects of causal structure (β), (ii) a decrease in (η + γ) while β remained constant as retention interval increased, and (iii) an increase in (η + γ) while β remained constant as subject age increased.

Intentions in Time

Representing and reasoning about goal-directed actions is necessary in order for autonomous agents to act in or understand the commonsense world. This paper provides a formal theory of intentional action based on Bratman's characterization of intention [Bratman, 1987; Bratman, 1990]. Our formalization profits from the formalization of Bratman's theory developed by Cohen and Levesque [1990a, 1990b]. We review their formalization and illustrate its weaknesses. Using Allen's temporal logic [Allen, 1984], we construct a formalization that satisfies Bratman's desiderata for an acceptable theory of intentional action. We introduce a characterization of success and failure of intentional action and show that our richer theory of time allows us to formalize more complex intentional actions, particularly those with deadlines. Finally, we argue that the use of a syntactic theory of belief allows us to accommodate a more descriptive theory of intentional action by fallible agents. Our work has relevance to multi-agent planning, speech-act processing and narrative understanding. We are using this theory to represent the content of narratives and to construct and understand description-based communication.

Evidence for Interrelated and Isolated Concepts from Prototype and Caricature Classifications

Previous research (Goldstone, 1991) has suggested that concepts differ in their degree of dependency on other concepts. While some concepts' characterizations depend on other simultaneously acquired concepts, other concepts are relatively isolated. The current experiments provide a new measure of a concept's interrelatedness/isolation. It is assumed that if the prototype of a concept is classified with greater accuracy than a caricature, then the concept is relatively independent of the influences of other concepts. If a caricature is more easily categorized than the prototype, then the concept is relatively dependent on other concepts. If these assumptions are made, then the current experiments provide converging support for an interrelated/isolated distinction. Instructing subjects to form images of the concepts to be acquired, or infrequently alternating categories during presentation, yields relatively isolated concepts. Instructing subjects to try to discriminate between concepts, or frequently alternating categories, yields relatively interrelated concepts.

Understanding Symbols: A Situativity-Theory Analysis of Constructing Mathematical Meaning

We report analyses of the construction and interpretation of mathematical symbols that refer to quantitative properties and relations of a physical system. Middle-school students solved problems that involved constructing tables, equations, and graphs to represent linear functions of a device where blocks are moved varying distances by turning a handle that winds string around spools of different sizes. Previous research analyzed activities of reasoning about quantities of this system as attunement to constraints and affordances, a characterization of students' implicit understanding of concepts of variable and linear functions. This report concerns activities of representing quantitative properties and relations using mathematical notations. We are developing analyses of constructing and interpreting tables, equations, and graphs in terms of attunement to constraints and affordances of the represented system, the system of notations, and relations between the constraints of the notations and the represented domain. We present examples that illustrate concepts of semantic clumps, groups, and morphisms; descriptive and demonstrative representations; multiple referent domains; and constructions of meaning in contributions to conversational discourse.

Is the Phonological Loop Articulatory or Auditory?

The paradigm of immediate serial recall (Baddeley, 1986) has been used extensively in investigation of working memory, but its relation to and implications for the nature of phonological processing have seldom been examined. We show that findings from this domain can be interpreted in two ways, and relate these two interpretations to a simple model of phonological processing. One interpretation emphasizes the availability of information from "output" phonological processing to "input" phonological processing, while the alternative account stipulates no such connections. On the basis of an experimental study designed to choose between the two accounts, we tentatively conclude that the interpretation suggesting output-input connectivity is supported. Establishment of this result would be of considerable interest, since it indicates that processes in language production can impact directly on processes in language perception.

Exploring the Nature and Development of Phonological Representations

Findings in infant speech perception suggest that early phonological perceptions may be syllabic in nature, and that there is a loss of sensitivity to nonnative contrasts toward the end of the first year of life. We present a neural network model that simulates these two phenomena. In addition, the model and simulations (1) demonstrate how information about stress can be utilized in generating syllable-like perceptions; (2) provide a simple means of extracting static representations from a dynamic and co-articulated signal; and (3) indicate that the development of "attractor" states may be necessary in network models of these phenomena.

Analogical Similarity: Performing Structure Alignment in a Connectionist Network

We describe a connectionist network that performs a complex, cognitive task. In contrast, the majority of neural network research has been devoted to connectionist networks that perform low-level tasks, such as vision. Higher cognitive tasks, like categorization, analogy, and similarity may ultimately rest on alignment of the structured representations of two domains. We model human judgments of similarity, as predicted by Structure-Mapping Theory, in the one-shot mapping task. We use a localist connectionist representation in a Markov Random Field formalism to perform cross-product matching on graph representations of propositions. The network performs structured analogies in its domain flexibly and robustly, resolving local and non-local constraints at multiple levels of abstraction.

From Weared to Wore: A Connectionist Account of Language Change

This paper describes a technique developed for modeling historical change in connectionist networks, and briefly reviews previous work applying that technique to the problem of the historical development of the regular verb system from early to late Old English. We then broaden the scope of the simulations and ask what the effect would be of having such changes occur in a single mechanism which is processing both the regular and irregular verbs, and extended over a longer time period. As we shall see, the results are highly consistent with the major historical developments. Furthermore, the results are readily understood in terms of network dynamics, and provide a rationale for the shape of the attested historical changes.

Memory Use During Hand-Eye Coordination

Recent successful robotic models of complex tasks are characterized by use of deictic primitives and frequent access to the sensory input. Such models require only limited memory representations, a well-known characteristic of human cognition. We show, using a sensori-motor copying task, that human performance is also characterized by deictic strategies and limited memory representations. This suggests that the deictic approach is a fruitful one for understanding human brain mechanisms; it also suggests a computational rationale for the limitations on human short-term memory.

Incremental Syntax Processing and Parsing Strategies

Psycholinguistic models of language processing usually postulate that parsing the syntactic structure of a sentence proceeds incrementally in some way, which means that the syntactic analysis is not delayed until the end of the clause or sentence. In this paper we will discuss different conceptions of incrementality in the light of empirical studies on the influence of grammatical case on structure building in German subject-object asymmetries. It will be shown that neither word-by-word attachment of partial structures into the phrase marker of the sentence (Frazier, 1987a), nor head corner parsing (Abney, 1987; Kay, 1989) can explain the data found in our experiments. As a strategy which is consistent with our data, left-corner parsing (Johnson-Laird, 1983) will be discussed.

Increases in Cognitive Flexibility over Development and Evolution: Candidate Mechanisms

When chimpanzees, monkeys, and rats are disoriented, they reorient themselves using geometrical features of their environment (Tinkelpaugh, 1932; Cheng, 1986; Margules & Gallistel, 1988). In rats this ability appears to be modular, impervious to nongeometric information (e.g., distinctive colors and odors) marking important locations (Cheng, 1986; Margules & Gallistel, 1988). I tested young children and adults in an orientation task similar to that used with rats (Hermer & Spelke, under review). Whereas adults readily used both geometric and nongeometric information to orient themselves, young children, like rats, used only geometric information. These findings provided the first evidence that humans, like many other mammals, orient by using environmental shape; that the young child's orientation system, like that of rats, is informationally encapsulated (Fodor, 1983); and that in humans the apparent modularity of this system is overcome during development.

Recognition-based Problem Solving

This paper describes a space of possible models of knowledge-lean human problem solving characterised by the use of recognition knowledge to control search. Recognition-based Problem Solvers (RPS) are contrasted to Soar and ACT-R, which tend to use large goal stacks to control search, and to situated theories of cognition that tend not to be able to do search at all (e.g. Pengi). It is shown that with appropriate knowledge increments RPS can apply algorithms such as depth-first search with a bounded demand on Working Memory. The discussion then focuses on how some weak methods, such as depth-first search, are more difficult to encode in RPS than others. It is claimed that the difficulty of encoding depth-first search reflects human performance.

Working Memory Failure in Phone-Based Interaction

This paper investigates working memory failure in menu-driven Phone-Based Interaction (PBI). We have used a computational model of Phone-Based Interaction (PBI USER) to generate predictions about the impact of three factors on WM failure: PBI features (i.e., menu structure), individual differences (i.e., WM capacity) and task characteristics (i.e., task format and number of tasks). Our computational model is based on the theory of WM proposed by Just and Carpenter (1992). This theory stipulates that the storage and the processing of information generate demands for WM resources. Our empirical results provide strong evidence for the importance of storage demands, and moderate evidence for the importance of processing demands, as predictors of WM failure in PBI. In addition, our results provide evidence for the importance of individual differences in WM capacity as a predictor of WM failure in PBI. Finally, our results indicate that, contrary to general guidelines for the design of PBI, deep menu hierarchies (no more than three options per menu) do not reduce WM error rates in PBI.

The Use of Hints as a Tutorial Tactic

Hints are a useful and common pedagogical tactic, particularly in one-on-one tutoring sessions. Hints can serve: (1) to activate otherwise inert knowledge, making possible its recall, or (2) to stimulate the generation of inferences required to complete a task using knowledge thought to be available to the student. We report on the analysis of one-on-one tutoring sessions conducted by two tutors using a computer system to capture the dialogue. Hints either explicitly convey information to the student or they point to information presumed to be available to the student. We have identified at least 10 different forms of hints used by our tutors. The two tutors differ in the total number of hints they generate and in the prevalence of hints of different types. Our results are being applied to the design of an intelligent tutoring system (ITS) with a natural language interface.

Rapid Unsupervised Learning of Object Structural Descriptions

A single view of an unfamiliar object typically provides enough information about the object's shape to permit recognition from a wide range of new viewpoints. A recent model by Hummel and Biederman (1990, 1992) provides a partial account of this ability in terms of the activation of viewpoint-invariant structural descriptions of (even unfamiliar) objects. We describe the Structural Description Encoder (SDE), a self-organizing feed-forward neural network that learns such descriptions in one or at most two exposures. Rapid, reliable learning results from the interactions among recruited and unrecruited units, whose response characteristics are differentiated through the use of dynamic thresholds and learning rates.

The Learning of Weak Noun Declension in German: Children vs. Artificial Network Models

Different artificial networks are presented with the task of learning weak noun declension in German. This morphological rule is difficult for cue-based models because it requires the resolution of conflicting cue-predictions and a dynamic positional coding due to suffixation. In addition, its 'task frequency' is very low in natural language. This property is preserved in the training input to study the models' abilities to handle low-frequency rules. The performances of three kinds of networks, 1) feedforward networks, 2) recurrent networks, and 3) recurrent networks with short-term memory (STM) capacity, are compared to empirical findings of an elicitation experiment with 129 subjects of ages 5-9 and adult age.

Correcting Causal Explanations in Memory

Several lines of research have suggested that information previously integrated into memory can influence inferences and judgments even when more recent information discredits it. A first experiment tested the prediction that information providing causal structure, as opposed to information that is mentioned but otherwise unintegrated into the account, would exert more influence; it found that subjects used both discredited and valid information affording causal structure to make inferences, but not incidentally mentioned information with the same content. Experiment 2 found that when a plausible causal alternative accompanied the correction, subjects showed less influence from the discredited information than when the correction simply negated earlier information. The findings suggest that the continued influence of discredited information depends on the causal structure it affords.

Effects of Object Structure on Recognizing Novel Views of Three-Dimensional Objects

Can observers recognize novel views of three-dimensional (3-D) objects, created by rotations in depth from a single familiar view? Three experiments using 3-D model objects are reported, demonstrating that (a) subjects can indeed recognize novel views under these circumstances, and (b) recognition accuracy depends on the types of objects employed. More precisely, subjects successfully recognized geometrically regular and irregular objects rotated by 180 degrees about the vertical axis. However, only geometrically regular objects were recognized when rotated similarly by 90 degrees. These findings cannot be easily accommodated by contemporary object-centered or viewer-centered theories of shape-based object recognition, which make no provisions for representing different types of objects uniquely. Alternatively, these findings support a theory in which inferences about objects' 3-D shapes are generated from information implicit in their two-dimensional (2-D) bounding contours, or silhouettes (Johnson, 1993). Such inferences may be premised on rules that capture important regularities between 2-D bounding contours and 3-D surface geometry (e.g., Beusmans, Hoffman, & Bennett, 1987; Richards, Koenderink, & Hoffman, 1987).

Contingent Frequency Effects in Syntactic Ambiguity Resolution

We investigated contingent frequency effects in syntactic ambiguity resolution in three self-paced reading experiments. Experiment 1 demonstrated that the frequency with which that occurs as a determiner or a complementizer in different syntactic environments predicts readers' initial parsing preferences. Experiments 2 and 3 demonstrated frequency and regularity effects for reading the and that after different types of verbs, effects similar to those that have been well documented in word recognition.

Principal Hidden Unit Analysis: Generation and Interpretation of Principal Networks by the Minimum Entropy Method

In the present paper, a principal hidden unit analysis with entropy minimization is proposed to obtain a simple or fundamental structure from original complex structures. The principal hidden unit analysis is composed of four steps. First, entropy, defined with respect to the hidden unit activity, is minimized. Second, several principal hidden units are selected according to the il-index, which represents the strength of the response of hidden units to input patterns. Third, the performance of the obtained principal network is examined with respect to error or generalization. Finally, the internal representation of the obtained principal network must be appropriately interpreted. Applied to a rule-plus-exception problem, a symmetry problem, and an autoencoder, it was confirmed in all cases that, by using the entropy method, a small number of principal hidden units were selected. With these principal hidden units, principal networks were constructed that produced the targets almost perfectly. The internal representation could easily be interpreted, especially for the simpler problems.
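As a rough illustration of the first two steps, the sketch below computes an entropy term over hidden-unit activity and selects the most strongly responding units; the network, the training loop, and the exact index used in the paper are simplified stand-ins:

```python
# Minimal sketch of the entropy term and unit-selection step; the selection
# index here is a stand-in, not the paper's il-index.
import numpy as np

def hidden_entropy(h, eps=1e-12):
    """Entropy of hidden-unit activity, normalized per pattern.
    h: (n_patterns, n_hidden) activations in [0, 1]."""
    p = h / (h.sum(axis=1, keepdims=True) + eps)
    return -(p * np.log(p + eps)).sum(axis=1).mean()

def select_principal_units(h, k):
    """Rank hidden units by the strength of their response to the input
    patterns (a stand-in for the paper's index) and keep the top k."""
    strength = (h ** 2).mean(axis=0)
    return np.argsort(strength)[::-1][:k]

h = np.random.rand(100, 8)        # pretend these are trained hidden activations
print(hidden_entropy(h))          # this term would be added to the loss and minimized
print(select_principal_units(h, k=2))
```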

Changes in Children's Conceptual Models of a Natural Phenomenon Using a Pictorial Complex Computer Simulation as a Tool

This paper describes an investigation of how children construct a conceptual model of a selected natural phenomenon when using a pictorial computer simulation of that phenomenon. The paper concentrates on describing changes in children's conceptual models that appeared after an independent and spontaneous exploration process. The selected natural phenomenon was the variation of sunlight and of the sun's heat as experienced on the earth, related to the positions of the earth and the sun in space. Before the exploration of the natural phenomenon with the pictorial computer simulation, children's conceptual models were at very different levels: some children's conceptual models of the phenomenon were quite undeveloped, while others' were very developed. Only some children's conceptual models contained misconceptions. The most significant change in children's conceptual models was that interconnections between different things and phenomena began to be constructed, and this construction appeared to move in the direction of currently accepted scientific knowledge. According to these findings, it seems possible that independent exploration of a given natural phenomenon by means of a pictorial computer simulation at a very early stage, when children are spontaneously interested in such things, could help children form a correctly directed conceptual model of that phenomenon.

Musical Pleasure Through the Unification of Melodic Sequences

The purpose of this paper is to extend an earlier theory of pleasure associated with harmonic sequences to melodic sequences. The theory stated that the pleasurable sequences are those that allow a coherent transition from one mental state to the next. This can be measured in a connectionist model by noting the strength of the activation boost in a competitive layer categorizing the elements of the sequence. It is suggested that a similar mechanism will work for melodic sequences if two conditions are met. First, the melodic sequences must be represented such that two sequences judged to be similar by the ear have overlapping distributed representations. Second, a mechanism must be posited to separate melodic sequences into significant groups. The results of a network accomplishing these tasks are presented.

Apparent Computational Complexity in Physical Systems

Many researchers in AI and cognitive science believe that the information-processing complexity of a mechanism is reflected in the complexity of a description of its behavior. In this paper, we distinguish two types of complexity and demonstrate that neither can be an objective property of the underlying physical system. A shift in the method or granularity of observation can cause a system's behavioral description to change in both the number of apparent states and the complexity class. These examples demonstrate how the act of observation itself can suggest frivolous explanations of physical phenomena, up to and including computation.

The Role of Curvature in Representing Shapes for Recognition

Attneave (1954) claimed that approximations made by connecting the points of maximum curvature (MAX points) in a picture were necessary and sufficient for representing shapes for recognition. Lowe (1985) in turn argued that an equally sufficient representation is created by connecting points of minimum curvature (MIN points); hence MAX points are not necessary. However, both Attneave and Lowe neglected the role of curvature concentration in their arguments. It is hypothesized here that for shapes with curvature concentrated at a small number of points, MAX-point pictures are far better representations than MIN pictures. More generally, the more curvature is concentrated in fewer points, the greater the advantage of MAX figures over MIN figures in recognizability. This hypothesis was experimentally verified; some implications for shape representation are discussed.
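The MAX-point idea can be illustrated with a toy computation (not the authors' stimuli or procedure), approximating curvature at each vertex of a sampled contour by its turning angle and keeping the sharpest points:

```python
# Toy illustration of MAX points: approximate curvature at each vertex of a
# sampled closed contour by its turning angle, then keep the sharpest points.
import numpy as np

def turning_angles(points):
    """points: (n, 2) array of contour samples, assumed closed."""
    prev = np.roll(points, 1, axis=0) - points
    nxt = np.roll(points, -1, axis=0) - points
    cos = (prev * nxt).sum(axis=1) / (np.linalg.norm(prev, axis=1) * np.linalg.norm(nxt, axis=1))
    return np.pi - np.arccos(np.clip(cos, -1.0, 1.0))   # 0 = straight, large = sharp corner

def max_points(points, k):
    return points[np.argsort(turning_angles(points))[::-1][:k]]

square_ish = np.array([[0, 0], [1, 0], [2, 0], [2, 1], [2, 2], [1, 2], [0, 2], [0, 1]], float)
print(max_points(square_ish, k=4))   # recovers the four corners
```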

Modeling Melodic Expectation: Using Three "Musical Forces" to Predict Melodic Continuations

Part of what we call "expression" or "expressive meaning" in music may be regarded as an emergent property of the interaction of "musical forces" that I call gravity, magnetism, and inertia. These forces are implicit in Gestalt psychological principles of perceptual organization, current theories of tonal music, and recent experimental work in psychoacoustics. An explicit account of their operation and interaction allows us to predict which patterns of musical motion trained listeners will tend to expect in tonal music. A computer program called What Next models the operation of these forces. Given a string of melodic pitches in a specific tonal context, What Next lists predicted continuations. A comparison of these predictions with the results of an experiment (Lake, 1987), in which trained listeners were given a string of melodic pitches in a specific tonal context and asked to sing a continuation, suggests that the forces modeled have cognitive significance and explanatory power.

A Connectionist Implementation of the ACT-R Production System

This paper describes a connectionist implementation of the ACT-R production system. Declarative knowledge is stored as chunks in separate associative memories for each type. Procedural knowledge consists of the pattern of connections between the type memories and a central memory holding the current goal. ACT-R concepts such as adaptive learning and activation-based retrieval and matching map naturally onto connectionist concepts. The implementation also provides a more precise interpretation of issues in ACT-R such as the timing of memory retrieval and production firing, retrieval errors, and partial matching. Finally, the implementation suggests limitations on production rule structure.

Making Mathematical Connections Through Natural Language: A Computer Model of Text Comprehension in Arithmetic Word Problem Understanding

Understanding arithmetic word problems involves a complex interaction of text comprehension and mathematical processes. This work presents a computer model of the hypothesized processes required of a young student solving arithmetic word problems, including the processes of sentence-level reading and text integration. Unlike previous computer simulations of word problem solving, which neglect the early stages of text processing, this model forces a detailed consideration of the linguistic process, which is being increasingly recognized as a primary source of difficulty. Experiments were conducted to isolate critical text comprehension processes. Children's probability of solution was analyzed in regression analyses as a function of the model's text comprehension processes. A variable measuring the combined effects of working memory load and text integration inferences accounted for a significant amount of variance across four grade levels (K-3). The results suggest new process-oriented measures for determining why a particular word problem may be difficult, especially for young students. An implication for education is the potential for a difficulty-differentiated network of problems that includes multiple rewordings of each "traditional" problem wording, as an aid for classroom assessment and future computer-based learning environments.

The Effect of Experience on Across-Domain Transfer of Diagnostic Skill

Transfer across domains has generally been difficult to find. Recent studies have indicated that abstract skills may transfer if adequate task analyses are used to define the target skill and people receive the proper training in attaining the skill. This study examined transfer of diagnostic skill across domains for experienced subjects (extensive programming experience but no electronics experience) and inexperienced subjects (no programming or electronics experience) when domain-specific information was provided. Four levels of diagnostic skill were identified. Inexperienced subjects could solve problems but did not display an advanced level of diagnostic skill in either domain. However, all experienced subjects displayed high levels of skill on most problems, both in their domain of expertise and in the domain in which they were inexperienced. The results suggest that a general diagnostic skill can transfer spontaneously across domains given extensive practice in one domain, and that it is not acquired to an advanced level without training.

Explaining Language Universals in Connectionist Networks: The Acquisition of Morphological Rules

Across languages there are certain characteristics that are shared. Linguists trying to explain language universals have come up with different theories: they argue for (1) the innateness of general linguistic principles, (2) the communicative functions reflected in linguistic structure, (3) the psychological demands placed upon language users, or (4) grammar-internal explanations. This paper tries to explain some of the morphological universals in the framework of a connectionist network, supporting the third approach. Employing simple recurrent networks, a series of experiments was conducted on various types of morphological rules. The results show that the model's performance mirrors the extent to which the different types of rules occur in natural languages. The paper explains how the model discovered these universals.

Thinking With a Mouse

Isomorphic problems are to cognition what optical illusions are to perception. By drawing attention to anomalies, such as problems that are identical in form but vary widely in difficulty, they highlight cognitive processes normally hidden among the minutiae of our theories. Results are reported from an experiment in which subjects solved a three-disk Tower of Hanoi problem and its Monster Globe change isomorph using direct-manipulation tableaus or paper and pencil. Subjects using direct manipulation were found to solve the Monster Globe problem in half the time taken by paper-and-pencil subjects. An explanation revolving around attunement to environmental constraints is advanced to account for this difference.

Real-time Control of Animated Broad Agents

As autonomous agents' interactions with humans become richer, we believe it will become increasingly important for some of the agents to have believable and engaging personalities. In previous papers we have described Tok, a broad agent architecture which integrates reactivity, goal-directed behavior, emotion, and some memory and inference for agents in non-real-time worlds. In this paper we discuss the issues raised when we extend Tok to work in real-time, animated domains. Convincing animated motion poses three challenges to the architecture: multiple primitive actions and higher-level activities must be executed simultaneously; future actions must be known before current actions complete, to enable smooth animation; and the mind must be fast enough to provide the impression of awareness. Here we describe Hap, the reactive substrate of Tok, and its approaches to these challenges. The described architecture was used to create three agents, called woggles, in a world titled Edge of Intention, which was first shown at the AAAI-92 AI-based Arts Exhibition.

German Inflection: The Exception That Proves The Rule

Connectionist models of language equate default inflection (e.g., fax-faxed) with high frequency, while symbolic models compute regular inflection through the application of a mental rule that is independent of high frequency. The German -s plural is low in frequency (7.2% of types), but in an experiment with novel nouns we show that -s behaves as a default. This argues against the connectionist model of inflection and in favor of symbolic models.

A Connectionist Model of the Development of Seriation

Seriation is the ability to order a set of objects on some dimension, such as size. Psychological research on the child's development of seriation has uncovered both cognitive stages and perceptual constraints. A generative connectionist algorithm, cascade-correlation, is used to successfully model these psychological regularities. Previous rule-based models of seriation have been unable to capture either stage progressions or perceptual effects. The present simulations provide a number of insights about possible processing mechanisms for seriation, the nature of seriation stage transitions, and the opportunities provided by the environment for learning about seriation.

All Differences are not Created Equal: A Structural Alignment View of Similarity

An emerging view in cognitive psychology is that the determination of similarity involves a comparison of structured representations. On this view, some differences are related to the commonalities of a pair (alignable differences) and others are unrelated to the commonalities of a pair (nonalignable differences). Previous evidence suggests that pairs of similar items have more commonalities and alignable differences than do pairs of dissimilar items. Structural alignment further predicts that alignable differences should be easier to find than nonalignable differences. Taken together, these assertions lead to the counterintuitive prediction that it should be easier to find differences for similar pairs than for dissimilar pairs. This prediction is tested in two studies in which subjects are asked to list differences for as many word pairs as possible in a short period of time. In both studies, more differences are listed for similar pairs than for dissimilar pairs. Further, similar and dissimilar pairs differ in the number of alignable differences listed for them, but not in the number of nonalignable differences. These studies provide additional support for the structural alignment view of similarity.

On the Long-Term Retention of Studied and Understudied U.S. Coins

The present study addresses the issue of whether visual information is retained well or poorly, using the familiar task of Nickerson and Adams (1979) of recalling a U.S. penny. Although Nickerson and Adams' findings suggested poor retention of visual detail, earlier recognition memory studies suggested very good retention. An unfamiliar Liberty dime was used to assess the durability of a one-minute study period for an unfamiliar coin. Recall performance on the unfamiliar dime was better than recall performance on the familiar penny, even when the test on the dime was delayed for one week. The order in which recall of the penny or dime occurred significantly affected performance, with prior unaided recall of the penny enhancing the subsequent recall of the studied dime. These findings document the importance of intentional study for memory of the details of a common object and suggest that, with intentional study, good retention can be obtained for the visual details of such objects.

Inflectional Morphology and Phonological Regularity in the English Mental Lexicon

We used a cross-modal repetition priming task to investigate the mental representation of regular and irregular past tense forms in English. Subjects heard a spoken prime (such as walked) immediately followed by a lexical decision to a visual probe (such as walk). We contrasted three types of English verbs, varying in the phonological and morphological regularity of their past tense inflection. These were (i) Regular verbs (jump/jumped), with the regular /-d/ inflection and no stem change, (ii) Semi-Weak verbs (burn/burnt, feel/felt), with irregular alveolar inflection and some phonologically regular stem vowel change, and (iii) Vowel Change verbs (sing/sang, give/gave), which mark past tense through phonologically irregular changes in the stem vowel. The stem forms of these verbs were presented in three prime conditions, preceded either by an Identity Prime, a Past Tense Prime, or a Control Prime. The Identity Prime significantly facilitated lexical decision responses for all three verb classes, but the Past Tense Prime, while significantly facilitating responses in the Regular verb class, produced no overall effect for the Semi-Weak verbs and significant interference for the Vowel Change verbs. We conclude that phonological irregularity in the relation between a stem and an inflected form can lead to very different lexical structures than we find for more regular phonological relationships.

Schema-based Categorization

Many theories of conceptual organization assume the existence of some form of mental similarity metric (Medin and Schaffer, 1978; Hintzman and Ludlum, 1980; Nosofsky, 1988; Shepard, 1987; Kruschke, 1992, among others). In the domain of categorization, such theories have been called "similarity-based" (Murphy and Medin, 1985). Criticism of similarity-based theories has led to a call for "theory-based" models of categorization (Murphy and Medin, 1985; Rips, 1989; Barsalou, 1991; Medin, 1989). Theory-based views remain somewhat vague, however. In this paper I outline a schema-based theory of conceptual organization. The model depends on the notion of a mental similarity metric but makes use of connectionist learning principles to develop a conceptual organization that solves a problem faced by purely similarity-based models of categorization. I discuss the relationship of this theory to similarity-based and theory-based accounts.

Kin Recognition, Similarity, and Group Behavior

This paper presents an approach to describing group behavior using simple local interactions among individuals. We propose that for a given domain a set of basic interactions can be defined which describes a large variety of group behaviors. The methodology we present allows for simplified qualitative analysis of group behavior through the use of shared goals, kin recognition, and minimal communication. We also demonstrate how these basic interactions can be simply combined into more complex compound group behaviors. To validate our approach, we implemented an array of basic group behaviors in the domain of spatial interactions among homogeneous agents. We describe some of the experimental results from two distinct domains: a software environment and a collection of 20 mobile robots. We also describe a compound behavior involving a combination of the basic interactions. Finally, we compare the performance of homogeneous groups to that of dominance hierarchies on the same set of basic behaviors.

Using Case-based Reasoning and Situated Activity to Write Geometry Proofs

As models of human cognition, previous geometry theorem-proving programs were inappropriately influenced by the ease with which computers manipulate syntactic formulae. The failure of those programs to pay attention to human perception doomed them as models of how humans solve geometry proof problems. Just as the study of theorem proving once evolved into the study of planning, it is time now for theorem proving to incorporate current ideas from the planning community. A close examination of what humans do when they try to solve geometry proof problems, and of how geometry is taught, reveals an emphasis on chunks of problem-solving knowledge derived from examples and retrieved on the basis of visual cues. These ideas are characteristic of the case-based reasoning and situated activity approaches in planning. This paper concludes with a brief description and trace of a computer program, POLYA, which does reactive, memory-based geometry theorem proving.

APECS: A Solution to the Sequential Learning Problem

This paper describes some modifications to back propagation that aim to remove one of its failings without sacrificing power. Adaptively Parametrised Error Correcting Systems (APECS) are shown not to suffer from the sequential learning problem, and to be capable of solving EOR (exclusive-or), higher-order parity, and negation problems. This opens the way to the development of connectionist models of associative learning and memory that do not suffer from "catastrophic interference", and may shed light on issues such as the episodic/semantic memory distinction.

Modeling Property Intercorrelations in Conceptual Memory

Behavioral experiments have demonstrated that people encode knowledge of correlations among the semantic properties of entities and that this knowledge influences performance on semantic tasks (McRae, 1992; McRae, de Sa, & Seidenberg, 1993). Independently, in connectionist theory, it has been claimed that relationships among semantic properties may provide structure that is required for the relatively arbitrary mapping from word form to word meaning (Hinton & Shallice, 1991). We explored these issues by implementing a modified Hopfield network (1982, 1984) to simulate the computation from word form to meaning. The model was used as a vehicle for developing explanations of the role played by correlated properties in determining short-interval semantic priming effects and the ease with which a property is verified as part of a concept. Simulations of the priming and property verification experiments of McRae (1992) are reported. It is concluded that correlations among properties encoded in conceptual memory play a key role in the dynamics of the computation of word meaning. Furthermore, a model in which property intercorrelations are central to forming basins of attraction corresponding to concepts may provide important insights into lexical memory.
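The flavor of the approach can be conveyed with a vanilla Hopfield (1982) network rather than the authors' modified word-form-to-meaning model; in this hedged sketch, correlated properties acquire mutually excitatory weights that pull a partial cue into its concept:

```python
# Rough sketch with a standard Hopfield network (not the paper's model):
# correlated semantic properties end up with mutually excitatory weights and
# help complete a partial property pattern into its concept.
import numpy as np

def train_hopfield(patterns):
    """Hebbian weights from bipolar (+1/-1) property vectors, one per concept."""
    w = patterns.T @ patterns / patterns.shape[0]
    np.fill_diagonal(w, 0.0)
    return w

def settle(w, state, steps=20):
    for _ in range(steps):
        for i in np.random.permutation(len(state)):   # asynchronous updates
            state[i] = 1 if w[i] @ state >= 0 else -1
    return state

concepts = np.array([[1, 1, 1, -1, -1, -1],    # e.g., "has wings, flies, has feathers"
                     [-1, -1, -1, 1, 1, 1]])   # e.g., "has fur, barks, has four legs"
w = train_hopfield(concepts)
cue = np.array([1, 1, -1, -1, -1, -1])         # partial, noisy property pattern
print(settle(w, cue.copy()))                   # settles to the first concept
```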

Emergent Control and Planning in an Autonomous Vehicle

We use a connectionist network trained with reinforcement to control both an autonomous robot vehicle and a simulated robot. We show that, given appropriate sensory data and architectural structure, a network can learn to control the robot for a simple navigation problem. We then investigate a more complex goal-based problem and examine the plan-like behavior that emerges.

Strategic Social Planning - Looking for Willingness in Multi-Agent Domains

This paper deals with the use of social knowledge by an autonomous agent that is planning its behavior and, in particular, discovering that it needs help. We aim to show how knowledge about dependence relations can be inserted into an agent architecture so that the agent achieves cognitively plausible behavior. A number of basic criteria are designed to endow our agent architecture with the ability to generate choices about social interactions and requests. Particular attention is paid to the criteria for assessing others' willingness to give help, and to the interaction of these criteria with the agent's general attitudes and skills.

Presentations and This and That: Logic in Action

The tie between linguistic entities (e.g., words) and their meanings (e.g., objects in the world) is one that a reasoning agent had better know about and be able to alter when occasion demands. This has a number of important commonsense uses. The formal point, though, is that a new treatment is called for so that rational behavior via a logic can measure up to the constraint that it be able to change usage, employ new words, change meanings of old words, and so on. Here we do not offer a new logic per se; rather we borrow an existing one (step logic) and apply it to the specific issue of language change.

Categorizing Example Types in Context: Applications for the Generation of Tutorial Descriptions

Different situations may require the presentation of different types of examples. For instance, some situations require the presentation of positive examples only, while others require both positive and negative examples. Furthermore, different examples often have specific presentation requirements: they need to appear in an appropriate sequence, be introduced properly, and often require associated prompts. It is important to be able to identify what is needed in which case, and what needs to be done in presenting the example. A categorization of examples, along with their associated presentation requirements, would help tremendously. This issue is particularly salient in the design of a computational framework for the generation of tutorial descriptions that include examples. Previous work on characterizing examples has approached the issue from the direction of when different types of examples should be provided, rather than what characterizes the different types. In this paper, we extend previous work on example characterization in two ways: (i) we show that the scope of the characterization must be extended to include not just the example, but also the surrounding context, and (ii) we characterize examples in terms of three orthogonal dimensions: the information content, the intended audience, and the knowledge type. We present descriptions from LISP textbooks to illustrate our points, and describe how such categorizations can be effectively used by a computational system to generate descriptions that incorporate examples.

Distributed Representation and Parallel Processing of Recursive Structures

We have developed principles integrating connectionist and symbolic computation by establishing mathematical relationships between two levels of description of a single computational system: at the lower level, the system is formally described in terms of highly distributed patterns of activity over connectionist units, and the dynamics of these units; at the higher level, the same system is formally described by symbolic structures and symbol manipulation. In this paper, we propose a specific treatment of recursion in which complex symbolic operations on recursive structures are mapped to massively parallel manipulation of distributed representations in a connectionist network.
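One standard way to cash out such a mapping is tensor-product binding of roles and fillers; the sketch below is only an illustration of that general idea and is not claimed to be the paper's exact encoding of recursive structure:

```python
# Illustrative tensor-product binding of a symbolic constituent structure into
# distributed representations; the paper's own scheme may differ in detail.
import numpy as np

rng = np.random.default_rng(0)
dim = 64
roles = {"left": rng.standard_normal(dim), "right": rng.standard_normal(dim)}
fillers = {name: rng.standard_normal(dim) for name in ("A", "B")}

def bind(role, filler):
    return np.outer(role, filler)            # one constituent as a rank-1 tensor

def unbind(structure, role):
    return role @ structure / (role @ role)  # approximate retrieval of the filler

structure = bind(roles["left"], fillers["A"]) + bind(roles["right"], fillers["B"])
recovered = unbind(structure, roles["left"])
print(max(fillers, key=lambda k: fillers[k] @ recovered))   # -> "A"
# Deeper recursion requires extra machinery for re-embedding bound structures,
# which is the part the paper's proposal addresses.
```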

Rule Learning and the Power Law: A Computational Model and Empirical Results

Using a process model of skill acquisition allowed us to examine the microstructure of subjects' performance of a scheduling task. The model, implemented in the Soar architecture, fits many qualitative (e.g., learning rate) and quantitative (e.g., solution time) effects found in previously collected data. The model's predictions were tested with data from a new study in which the identical task was given to the model and to 14 subjects. Again a general fit of the model was found, with the restrictions that the task is easier for the model than for the subjects and that its performance improves more quickly. The episodic memory chunks it learns while scheduling tasks show how the acquisition of general rules can be performed without resort to explicit declarative rule generation. The model also provides an explanation of the noise typically found when fitting a set of data to a power law: it is the result of chunking over actual knowledge rather than "average" knowledge. Only when the data are averaged (over subjects, here) does the smooth power law appear.
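For reference, the power law in question has the form T = a * N^(-b); the snippet below fits it to invented averaged solution times (not the study's data) by regression in log-log coordinates:

```python
# Minimal illustration of fitting the power law of practice, T = a * N**(-b).
# The solution times below are invented for illustration, not the study's data.
import numpy as np

trial = np.arange(1, 11)
mean_time = np.array([30.0, 21.5, 17.8, 15.6, 14.1, 13.0, 12.2, 11.5, 11.0, 10.5])

# Linear regression in log-log coordinates recovers the exponent and scale.
slope, intercept = np.polyfit(np.log(trial), np.log(mean_time), 1)
a, b = np.exp(intercept), -slope
print(f"T ~= {a:.1f} * N^(-{b:.2f})")
residuals = mean_time - a * trial ** (-b)
print(np.round(residuals, 2))   # individual subjects would show larger, chunk-driven noise
```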

Children with Dyslexia Acquire Skill More Slowly

Two studies are reported in which a group of adolescent children with dyslexia and a group of normal children matched for age and IQ undertook extended training. In Study 1, which comprised three phases of learning over an 18-month period, the children learned to navigate via key presses around a fixed circuit of a computer maze. It was concluded that, following extended training under these optimal conditions, the children with dyslexia had normal 'strength' of automatisation (as assessed by resistance to unlearning, by ease of relearning, and by dual-task performance), but that their 'quality' of automatisation (as assessed by speed and accuracy) was impaired. Study 2 investigated the blending of two compatible simple reaction responses into a two-choice reaction. Although performance on the simple reactions was equivalent across groups, the children with dyslexia had more difficulty combining the two skills at first and showed significantly less learning over the course of the training period. The estimated learning rate was around 50% slower for the children with dyslexia, leading to the prediction that the proportionate slowing in acquisition time would increase as the square root of the normal acquisition time. A connectionist framework may provide a natural explanation of these phenomena.

Resolution of Syntactic Ambiguity: The Case of New Subjects

I review evidence for the claim that syntactic ambiguities are resolved on the basis of the meaning of the competing analyses, not their structure. I identify a collection of ambiguities that do not yet have a meaning-based account and propose one based on the interaction of discourse and grammatical function. I provide evidence for my proposal by examining statistical properties of the Penn Treebank of syntactically annotated text.

Modeling Forced-Choice Associative Recognition Through a Hybrid of Global Recognition and Cued-Recall

Global recognition models usually assume recognition is based on a single number, generally interpreted as 'familiarity'. Clark, Hori, and Callan (in press) tested the adequacy of such models for associative recognition, a paradigm in which subjects study pairs and must distinguish them from the same words rearranged into other pairs. Subjects chose a target pair from a set of three choices. In one condition all three choices contained a common, shared word (OLAP); in the other condition, all words were unique (NOLAP). Subjects performed slightly better in the NOLAP condition, but global recognition models predict an OLAP advantage, due to the correlation among test pairs. Clark et al. (in press) suggested that subjects may have used cued recall to supplement their familiarity judgments: the greater number of unique words in the NOLAP case provides extra retrieval chances that can boost performance. We tested this possibility by implementing a retrieval structure that leads to a hybrid of cued recall and recognition. We did this for several current memory models, including connectionist and neural net models. For all of the models we explored, the observed NOLAP advantage was difficult or impossible to produce. While some researchers propose that there is a cued-recall component to associative recognition, our modeling shows that this component cannot be realized easily in the extant memory models as they are currently formulated.

Facilitation of Recall Through Organization of Theatrical Material

This study explored how professional actors and students differ when asked to segment the same text. Previous research (Noice, 1992; Noice & Noice, in press) has indicated that actors, when preparing a role, divide the script into units called beats. To investigate the role this organizational device plays during learning, actors and students were presented with the same scene from a theatrical script. They were given explicit procedural instructions on how to segment the scene and label their divisions. Actors created far more divisions, resulting in smaller beats, and significantly more of their beats described goal-directed activities from the viewpoint of the assigned character. Students, on the other hand, seemed to stand outside the situation and describe the scene as a static state of affairs. The actors' approach to segmenting a script appeared to consist of inferring the causal relations between the events in the play, resulting in better recall of the temporal order. Previous research (Noice, 1993) showed that students who studied a theatrical script as if it were a school assignment retained as much material verbatim as actors did. However, in the present study, in which both groups were given this script division task, the actors' verbatim retention was significantly higher than the students'.

Generalizations by Rule Models and Exemplar Models of Category Learning

A rule-plus-exception model of category learning, RULEX (Nosofsky, Palmeri, & McKinley, 1992), and an exemplar-based connectionist model of category learning, ALCOVE (Kruschke, 1992), were evaluated on their ability to predict the types of generalization patterns exhibited by human subjects. Although both models were able to predict the average transfer data extremely well, each model had difficulty predicting certain types of generalizations shown by individual subjects. In particular, RULEX accurately predicted the prominence of rule-based generalizations, whereas ALCOVE accurately predicted the prominence of similarity-based generalizations. A hybrid model, incorporating both rules and similarity to exemplars, might best account for category learning. Furthermore, a stochastic learning rule, such as that used in RULEX, might be crucial for capturing the different types of generalization patterns exhibited by humans.
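For readers unfamiliar with ALCOVE, the following simplified sketch shows its forward pass (attention-weighted exemplar similarity feeding category outputs and a choice rule); the gradient-based learning of association and attention weights is omitted, and all values are illustrative:

```python
# Simplified sketch of ALCOVE's forward pass (Kruschke, 1992); learning of the
# association and attention weights is omitted, and the example values are invented.
import numpy as np

def alcove_choice_probs(stimulus, exemplars, attention, assoc, c=2.0, phi=2.0):
    """stimulus: (d,), exemplars: (m, d), attention: (d,), assoc: (k, m)."""
    dist = np.abs(exemplars - stimulus) @ attention          # attention-weighted city-block distance
    act = np.exp(-c * dist)                                  # exemplar-node activations
    out = assoc @ act                                        # category outputs
    return np.exp(phi * out) / np.exp(phi * out).sum()       # softmax/Luce choice rule

exemplars = np.array([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])
attention = np.array([0.8, 0.2])                             # dimension 1 matters more
assoc = np.array([[1., 1., -1., -1.],                        # category A associated with x1 = 0
                  [-1., -1., 1., 1.]])                       # category B associated with x1 = 1
print(alcove_choice_probs(np.array([0.1, 0.9]), exemplars, attention, assoc))
```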

Predictive Encoding: Planning for Opportunities

Suspended goals are those that are postponed by an agent because they do not fit into the agent's current, ongoing agenda of plans. Recognizing later opportunities to achieve suspended goals is an important cognitive ability because it means that one can defer work on a goal until one is in a better position to achieve the goal. This paper focuses on when and how such opportunities are recognized in everyday planning situations. According to our account of the phenomenon, suspended goals are associated at the time of encoding with features of the environment in which goal achievement would likely be possible. This process is referred to as predictive encoding. Later, when these features are perceived in the environment through normal inferential processes, the agent is reminded of suspended goals through the features previously associated with them, and recognizes the opportunity to achieve the goals. This approach is compared with other recent theories of opportunistic planning, and empirical work is presented which supports predictive encoding as an explanation for opportunistic planning behavior.

LetterGen: Writing from Examples

How do people write letters? Examine the contents of any letter-writing handbook. People gain proficiency in this form of discourse through adaptation of examples (or at least there is a wide consensus that examples are an excellent way to teach good writing skills). LetterGen constructs letters in a similar manner: The programmer initially separates example letters into snippets, and provides a plan derivation for each snippet. During an interview with the user, LetterGen infers which snippets are relevant to the user's stated goals and beliefs by instantiating and adapting the stored derivations. Snippets are then ordered into a new, complete letter. Additionally, representing letters as a set of plan derivations has the consequence that translated versions of a letter do not require special treatment; the target language is treated as just one of many goals.

Knowledge and the Simultaneous Conjoint Measurement of Activity, Agents, and Situations

We outline a measurement theory developed by integrating ideas about knowledge-level analysis, production system models of transfer, additive conjoint measurement, and Rasch models of measurement. Productions are assumed to represent situation-action elements of knowledge. The model views the performance of such a knowledge element as the combination of affordance properties associated with the element and ability properties associated with an individual. Under specified conditions, observed behavior can be used to separate and quantify variables measuring situation-action affordances and subject abilities. A specific version of this model is applied to data from four studies involving the CMU Lisp Tutor.
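The additive, Rasch-style form described here can be sketched as follows: the log-odds of a correct performance is the sum of a person-ability term and a situation-action affordance term (the parameter values below are illustrative only, not estimates from the Lisp Tutor studies):

```python
# Hedged sketch of a Rasch-style additive model; all numbers are illustrative.
import math

def p_correct(ability, affordance):
    """Rasch-style item response: logit(P) = ability + affordance."""
    return 1.0 / (1.0 + math.exp(-(ability + affordance)))

# A skilled and a less skilled student on a high- vs. low-affordance production.
for ability in (1.5, -0.5):
    for affordance in (1.0, -1.0):
        print(f"ability={ability:+.1f} affordance={affordance:+.1f} "
              f"P(correct)={p_correct(ability, affordance):.2f}")
```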

KA: Integrating Natural Language Processing and Problem Solving

Traditional cognitive science has studied various cognitive components in isolation. Our project attempts to alleviate some of the problems with this separation by focusing on the role of problem solving in language comprehension. Specifically, the KA project integrates six areas of current investigation in cognitive science: knowledge representation, memory organization, language comprehension, knowledge acquisition, problem solving, and control architectures. We are developing a model-based text interpretation and knowledge acquisition system which, when completed, will be able to read and interpret descriptions of physical devices, construct models of the devices, and use the acquired models to solve novel design problems. This paper presents three areas in which we use problem solving to constrain natural language understanding: (1) the use of mental models as a foundation for both problem solving and natural language understanding, (2) the use of design experience to influence the understanding process, and (3) the use of the design process to establish the cost of linguistic decisions.

Generalization with Componential Attractors: Word and Nonword Reading in an Attractor Network

Networks that learn to make familiar activity patterns into stable attractors have proven useful in accounting for many aspects of normal and impaired cognition. However, their ability to generalize is questionable, particularly in quasiregular tasks that involve both regularities and exceptions, such as word reading. We trained an attractor network to pronounce virtually all of a large corpus of monosyllabic words, including both regular and exception words. When tested on the lists of pronounceable nonwords used in several empirical studies, its accuracy was closely comparable to that of human subjects. The network generalizes because the attractors it developed for regular words are componential: they have substructure that reflects common sublexical correspondences between orthography and phonology. This componentiality is facilitated by the use of orthographic and phonological representations that make explicit the structured relationship between written and spoken words. Furthermore, the componential attractors for regular words coexist with much less componential attractors for exception words. These results demonstrate that attractors can support effective generalization, challenging "dual-route" assumptions that multiple, independent mechanisms are required for quasiregular tasks.

Eliciting Additional Information during Cooperative Consultations

Analysis of naturally occurring information-seeking dialogues indicates that information providers often query a user when there is insufficient information to formulate a plan that satisfies the user's intentions. In this paper, we present a mechanism that determines when queries are required to elicit additional information from a user and the manner in which these queries should be posed. Query generation is done by taking into account the amount of relevant information in the user's intentions as recognized by a plan recognition mechanism. The mechanism for query generation described in this paper has been implemented as a component of a computerized information-providing system in the travel domain.

Explanatory Coherence in the Construction of Mental Models of Others

A unified model of social perception, integrating causal reasoning and impression formation (Miller & Read, 1991), provides an account of how people arrive at coherent representations of others and explain their behavior. The model integrates work on a knowledge structure approach (Schank & Abelson, 1977) with Kintsch's (1988) construction-integration model and Thagard's (1989) model of explanatory coherence. We explore two issues in social perception. First, we show how the model can be used to explain trait inferences, where traits are treated as frames composed of goals, plans, resources, and beliefs. Second, we examine how people might combine inconsistent traits to arrive at a coherent model of another person, an example of conceptual combination.

Explorations in the Parameter Space of a Model Fit to Individual Subjects' Strategies while Learning from Instructions

In earlier work, we presented results from an empirical study that examined subjects' learning and browsing strategies as they explained to themselves instructional materials contained in a hypertext-based instructional environment. We developed a Soar model that, through parameter manipulation, simulated the strategies of each individual subject in the study. In this paper, we explore the parameters of these simulations and contribute several new results. First, we show that a relatively small proportion of strategies captured a large percentage of subjects' interaction behaviors, suggesting that subjects' approaches to the learning task shared some underlying strategic commonalities. Second, we show that lower-performing subjects employed a high proportion of working-memory-intensive strategies, which may have partially accounted for their inferior performance. Third, clusters of subjects identified through parameter analyses continued to exhibit similar behaviors during subsequent problem solving, suggesting that the clusters corresponded to genuine strategy classes. Furthermore, these clusters appeared to represent general learning and browsing strategies that were, in some sense, adaptive to the task.

Distributional Information and the Acquisition of Linguistic Categories: A Statistical Approach

Distributional information, in the form of simple, locally computed statistics of an input corpus, provides a potential means of establishing initial syntactic categories (noun, verb, etc.). Finch and Chater (1991, 1992) clustered words hierarchically, according to the distribution of local contexts in which they appeared in large, written English corpora, obtaining clusters that corresponded well with the standard syntactic categories. Here, a stronger demonstration of their method is provided, using 'real' data of the kind to which children are exposed during category acquisition, taken from the CHILDES corpus. For 2-5 million words of adult speech, clustering on syntactic and semantic bases was observed, with a high degree of clear differentiation between syntactic categories. For child data, some noun and verb clusters emerged, with some evidence of other categories, but the data set was too small for reliable trends to emerge. Some initial results investigating the possibility of classifying novel words using only the immediate context of a single instance are also presented. These results demonstrate that statistical information may play an important role in the processes of early language acquisition.
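A toy version of the distributional idea, using a tiny invented corpus rather than CHILDES, represents each word by counts of its immediate neighbours and compares words by cosine similarity (the paper itself uses much larger corpora and hierarchical clustering):

```python
# Toy distributional sketch: context-count vectors from a tiny invented corpus,
# compared by cosine similarity.  The real analyses use millions of words and
# hierarchical clustering.
import numpy as np
from collections import Counter, defaultdict

corpus = ("the dog chased the cat . the cat saw the dog . "
          "a dog saw a cat . the cat chased a dog .").split()

contexts = defaultdict(Counter)
for i, w in enumerate(corpus):
    if i > 0:
        contexts[w]["L:" + corpus[i - 1]] += 1
    if i < len(corpus) - 1:
        contexts[w]["R:" + corpus[i + 1]] += 1

features = sorted({f for c in contexts.values() for f in c})
vec = {w: np.array([c[f] for f in features], float) for w, c in contexts.items()}

def cos(a, b):
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

print(cos(vec["dog"], vec["cat"]))      # nouns share local contexts ...
print(cos(vec["dog"], vec["chased"]))   # ... more than a noun and a verb do
```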

Boundary Effects in the Linguistic Representations of Simple Recurrent Networks

This paper describes a number of simulations which show that SRN representations exhibit interactions between memory and sentence and clause boundaries reminiscent of effects described in the early psycholinguistic literature (Jarvella, 1971; Caplan, 1972). Moreover, these effects can be accounted for by the intrinsic properties of SRN representations, without the need to invoke external memory mechanisms, as has conventionally been done.

A Connectionist Attentional Shift Model of Eye Movement Control in Reading

A connectionist attentional-shift model of eye-movement control (CASMEC) in reading is described. The model provides an integrated account of a range of saccadic control effects found in reading, such as word skipping, refixation, and of course normal saccadic progression.

Jumpnet: A Multiple-Memory Connectionist Architecture

A jumpnet includes two memory storage systems: a processing network that employs superimpositional storage and a control network that recodes input patterns into minimally overlapping hidden patterns. By creating temporary, input-specific changes in the weights of the processing network, the control network causes the processing network to "jump" to the region of its weight space that is most appropriate for a particular input pattern. Simulation results demonstrate that jumpnets exhibit only moderate levels of interference while retaining the computational advantages of superimpositional memory.
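The two-network idea can be loosely sketched as input-specific weight modulation; the details below (the one-hot control code, the gating tensor) are stand-ins for the learned components of an actual jumpnet:

```python
# Loose illustration of input-specific weight modulation, not the jumpnet
# architecture itself: a control code derived from the input temporarily
# perturbs the processing network's weights.
import numpy as np

rng = np.random.default_rng(0)
n_in, n_out, n_code = 6, 4, 3

W_base = rng.standard_normal((n_out, n_in)) * 0.1     # superimposed long-term storage
W_gate = rng.standard_normal((n_out, n_in, n_code))   # how each code unit perturbs the weights

def control_code(x):
    """Stand-in for the learned control network: map the input to a sparse,
    minimally overlapping code (here just a deterministic one-hot bucket)."""
    code = np.zeros(n_code)
    code[hash(x.tobytes()) % n_code] = 1.0
    return code

def process(x):
    code = control_code(x)
    W_eff = W_base + W_gate @ code    # temporary, input-specific weight changes
    return np.tanh(W_eff @ x)         # the processing network "jumps" to this weight region

x = rng.standard_normal(n_in)
print(process(x))
```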

A Model of Visual Perception and Recognition Based on Separated Representation of "What" and "Where" Object Features

In the processes of visual perception and recognition, human eyes actively select essential information by way of successive fixations at the most informative points of the image. Perception and recognition are thus not only the results of neural computations but are also behavioral processes. A behavioral program defining a scanpath over the image is formed at the stage of learning (object memorizing) and consists of sequential motor actions, which are shifts of attention from one point of fixation to another, and the sensory signals expected to arrive in response to each shift of attention. In the modern view of the problem, invariant object recognition is provided by the following: (i) separated processing of "what" (object features) and "where" (spatial features) information at high levels of the visual system; (ii) mechanisms of visual attention using "where" information; and (iii) representation of "what" information in an object-based frame of reference (OFR). However, most recent models of vision based on an OFR have demonstrated the ability to recognize invariantly only simple objects, such as letters or binary objects without background, i.e., objects to which a frame of reference is easily attached. In contrast, we use not an OFR but a feature-based frame of reference (FFR), connected with the basic feature (edge) at the fixation point. This provides our model with the ability to represent complex objects in gray-level images invariantly, but demands realization of the behavioral aspects of vision described above. The developed model contains a neural network subsystem of low-level vision, which extracts a set of primary features (edges) at each fixation, and a high-level subsystem consisting of "what" (Sensory Memory) and "where" (Motor Memory) modules. The resolution of primary feature extraction decreases with distance from the point of fixation. The FFR provides both the invariant representation of object features in Sensory Memory and the shifts of attention in Motor Memory. Object recognition consists of the successive recall (from Motor Memory) and execution of shifts of attention, and the successive verification of the expected sets of features (stored in Sensory Memory). The model shows the ability to recognize complex objects (such as faces) in gray-level images invariantly with respect to shift, rotation, and scale.

Promoting Conceptual Change in Children Learning Mechanics

We describe an attempt to improve children's understanding of some basic concepts in mechanics, starting from an examination of their ideas of motion. Children's personal experience of the world colours their beliefs and explanations in science. A computer-augmented curriculum was designed to promote conceptual change in the classroom. Twenty-nine twelve- and thirteen-year-olds and their usual classroom teacher tried it out. The computer programs consisted of interactive simulations which allowed direct engagement with animations of real-world scenarios in which pupils have control over forces and objects. We demonstrate that this curriculum produced evidence of conceptual change. Our findings have implications for the development of a more sophisticated view of conceptual change.

The Acquisition of a Procedure Schema from Text and Experiences

Learning from problem-solving episodes has previously been modeled in two different ways: case-based planners (Hammond, 1989) acquire additional knowledge by storing new cases (i.e., the specific plans for different problems); search-based systems like SOAR or PRODIGY learn by chunking the result of a search process (Rosenbloom et al., 1991; Minton et al., 1989) and by forming macro-operators (Korf, 1985). This paper proposes comprehension-based learning as a third possibility: from specific problem-solving experiences (cases) and a related problem description (text), a coarse-grained abstract representation is constructed that may initially be inconsistent and redundant. By holistic integration processes, a coherent and consistent procedure schema is subsequently formed. Such a procedure schema can be reused to obtain solutions to problems that are quite different at the concrete level but have been comprehended to share abstract commonalities. The acquisition of such a procedure schema is exemplified for various solutions to different Tower of Hanoi problems (3, 4, and 5 disks). The utilization of these schemata is then discussed for the 4-disk problem, and the respective experimental data are reported.

Recency and Context: An Environmental Analysis of Memory

Central to the rational analysis of memory is the proposal that memory's sensitivity to statistical structure in the environment enables it to optimally estimate the odds that a memory trace will be needed now (Anderson, 1990). These odds are based on (1) the pattern of prior use of the memory (e.g., how recently it has been needed) and (2) the similarity of the current context to the previous contexts in which it has been needed. We have analyzed three sources of informational demand in the environment: (1) speech to children, (2) word usage in front-page headlines, and (3) the daily distribution of authors of electronic mail. We found that the factors that govern memory performance, including recency, also predict the odds that an item (e.g., a word or author) will be encountered now. Here we tested a basic prediction the theory makes about the independence between context and recency in the environment by extending our previous analysis of recency in the New York Times. Though the results of four behavioral experiments were inconsistent with this independence assumption, the combination of the rational and environmental analyses was able to account for 94% of the variance in these experiments.
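The independence assumption under test can be written as a multiplicative combination of a recency factor and a context factor; the functional forms and constants in this sketch are illustrative, not those estimated from the environmental analyses:

```python
# Sketch of the independence assumption: need odds factor into a recency term
# and a context term that combine multiplicatively.  All constants are invented.
def need_odds(days_since_last_use, context_overlap,
              base_odds=0.05, decay=0.5, context_gain=4.0):
    recency_factor = days_since_last_use ** (-decay)     # power-law decline with time since last use
    context_factor = 1.0 + context_gain * context_overlap
    return base_odds * recency_factor * context_factor   # independence: the two factors multiply

# A recently used word in a matching context vs. an old word in a novel context.
print(need_odds(days_since_last_use=1, context_overlap=0.8))
print(need_odds(days_since_last_use=30, context_overlap=0.0))
```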

Using Context to Interpret Indirect Requests in a Connectionist Model of NLU

The role of context in natural language understanding (NLU) is generally accepted as being important to the task of ascertaining "correct meaning." This is particularly true during the interpretation of otherwise ambiguous language constructs, such as lexical ambiguity resolution, metaphor understanding, and indirect speech act interpretation. This paper presents a feedforward connectionist model called SAIL2 which utilizes the context obtained from the processing of previously seen text to help resolve the ambiguity inherent in indirect speech acts, specifically in indirect requests.

Self vs. Other-Generated Hypotheses in Scientific Discovery

Other-generated hypotheses are often considered easier to test than self-generated hypotheses. To determine the precise effects of other-generated hypotheses, we propose three kinds of effects and describe a study designed to test for these effects of hypothesis source. The three kinds of effects considered are (i) hypothesis plausibility changes, (ii) skepticism changes, and (iii) process changes. Forty-two undergraduate subjects were given a microworld discovery task called Milktruck. Subjects either had to generate their own initial hypothesis or were given the most frequently generated hypothesis. It was found that the other-generated hypothesis led to more thorough investigation of hypotheses, resulting in a decrease in false terminations with incorrect solutions. The results suggested that these effects were caused by an increase in skepticism rather than by changes in hypothesis plausibility or process changes.

Thinking Locally to Act Globally: A Novel Approach to Reinforcement Learning

Reinforcement learning methods address the problem faced by an agent who must choose actions in an unknown environment so as to maximize the rewards it receives in return. To date, the available techniques have relied on temporal discounting, a problematic practice of valuing immediate rewards more heavily than future rewards, or else have imposed strong restrictions on the environment. This paper sketches a new method which utilizes a subjective evaluator of performance in order to (1) choose actions that maximize undiscounted rewards and (2) do so at a computational advantage with respect to previous discounted techniques. We present initial experimental results that attest to a substantial improvement in performance.
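One well-known way to optimize undiscounted returns is average-reward learning; the tabular sketch below illustrates that general approach and is not claimed to be the paper's specific method or its subjective evaluator:

```python
# Generic tabular average-reward learning sketch (an undiscounted alternative to
# discounting); this is an illustration of the general idea, not the paper's method.
import random
from collections import defaultdict

def average_reward_learning(env_step, actions, steps=5000,
                            alpha=0.1, beta=0.01, epsilon=0.1):
    Q = defaultdict(float)   # relative (state, action) values under the average-reward criterion
    rho = 0.0                # running estimate of the average reward per step
    state = 0
    for _ in range(steps):
        greedy = max(actions, key=lambda a: Q[(state, a)])
        action = greedy if random.random() >= epsilon else random.choice(actions)
        reward, next_state = env_step(state, action)
        best_next = max(Q[(next_state, a)] for a in actions)
        Q[(state, action)] += alpha * (reward - rho + best_next - Q[(state, action)])
        if action == greedy:          # update the average-reward estimate on greedy steps only
            rho += beta * (reward + best_next - max(Q[(state, a)] for a in actions) - rho)
        state = next_state
    return Q, rho

# Tiny two-state world: action 1 pays off only in state 1; action 0 toggles the state.
def env_step(state, action):
    reward = 1.0 if (state == 1 and action == 1) else 0.0
    next_state = 1 - state if action == 0 else state
    return reward, next_state

print(round(average_reward_learning(env_step, actions=[0, 1])[1], 2))
```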

Assessing Conceptual Understanding of Arithmetic Structure and Language

We contend that the primary role of an illustration or physical manipulative for teaching mathematics is to help the learner understand the language of the mathematics by providing the learner with a referential semantics. Having taught this to subjects, we address the question of how to assess their understanding. Problem-solving performance, we show, is insufficient by itself. An assessment of students' memory for the original problem statement, and of their ability to use cues within the referential semantics, is demonstrated as a potential method. Fourth graders (n=24) solved word algebra problems after (a) training with a designed referential semantics from a computer tutor called the Planner, (b) training with symbolic manipulatives, or (c) receiving no training (control). Although pretest-posttest gains were only moderately better for the Planner group than the symbol group, the former showed reliably better ability to reconstruct the problem statements after a 5-day delay. A particular advantage for recall of algebraic relations (as compared to assignments) was evident. Mental representation of relations has been singled out as a major obstacle to successful word problem solving. The support that a well-designed referential semantics provides for the formation and retrieval of appropriate mental structures for problem solving is discussed, as are methods for assessing problem comprehension and conceptual change.

The Ontogeny of Transformable Part Representations in Object Concepts

Many theories of object categorization assume that objects are represented in terms of components: attributes such as parts, functions, and perceptual properties. The origin of these components has been neglected in concept learning theories. We provide further evidence for a theory of part ontogeny in which parts are not simply given by perceptual information but develop over the course of category learning. Two experiments are reported. The experiments showed that subjects could identify a component of unknown stimuli as a part in spite of variation in its shape across exemplars of the category. However, the experiments also revealed perceptual constraints on what variations could be identified as the same part.

Content in Computation

Examining the philosophical foundations of theories in computational psychology, and cognitive science in general, is a methodology that is likely to yield strong results for problems in the philosophy of mind. One such problem is the problem of intentionality. An intentional property is semantic: it has parts which refer or are true. The problem is to explain why these properties are empirically, and hence causally, respectable. As in all special, "non-basic" sciences, an empirically respectable property has sufficient conditions for its instantiation. But specifying such conditions for the intentional properties used by computational psychology proves difficult, since apparently neither physical nor computational identity is enough. A solution is proposed by examining in some detail the computational theory of vision. A key element of this theory requires that the intentional properties attributed to representations are constrained by considering the later computational uses to which these representations must be put. This constraint is strong enough to yield sufficient conditions for a given representation to have a given intentional property. Since analogous constraints are likely to be found in other cognitive domains, the result argued for constitutes an important methodological and philosophical insight about cognitive science in general.

A New Approach to the Study of Subitizing as Distinct Enumeration Processing

This paper presents a new methodology for examining the phenomenon of subitizing. Subjects were presented with a standard numerosity-detection task but for a range of presentation times, to allow Task-Accuracy Functions to be computed for individual subjects. The data appear to show a continuous change in processing for numerosities from 2 to 5 when the data are aggregated across subjects. At the level of individual subjects, there appear to be qualitative shifts in enumeration processing after 3 or 4 objects. The approach used in this experiment may be used to test the claim that subitizing is a distinct enumeration process that can be used for small numbers of objects.

Learning Science with a Child-Focused Resource: A Case Study of Kids as Global Scientists

This project investigates student learning within an innovative model of classroom learning: student-generated and maintained nodes of "expertise" along the Internet superhighway. Using Internet access as the backbone for classroom activities in the environmental and atmospheric sciences, the Kids as Global Scientists (KGS) project is contributing insights into: 1) how the technology can be used to promote middle school students' construction of knowledge, 2) the nature of distributed expertise across child-developed and child-focused Internet nodes, and 3) the design of K-12-appropriate Internet interfaces. In particular, the KGS project recognizes that current Internet resources are not focused with a K-12 audience in mind. Therefore, a shift in focus from adult-focused to child-focused Internet nodes was established. In addition, the development of communities of learners which support the exchange of information between diverse and geographically distinct learners is investigated. Results indicate that becoming student experts in particular areas of science that other students value, and being responsible resources for other students' learning, increases the "use value" of students' knowledge and encourages the learning of real science from first-hand sources.

Concept Hierarchy Networks for Inheritance Systems: Concept Formation, Property Inheritance and Conflict Resolution

Most inheritance systems which use hierarchical representation of knowledge do not consider learning. In this paper, a concept hierarchy network model based on adaptive resonance theory is proposed for inheritance systems, which explicitly includes learning as one of its major design goals. By chunking relations between concepts as cognitive codes, the concept hierarchy can be learned and modified through experience. Furthermore, fuzzy relations between concepts can be represented by weights on the links connecting them. It is shown that, by a spreading activation process based on code firing and competition between conflicting concepts, the model is able to exhibit property inheritance and to resolve such conflicting situations as exceptions and conflicting multiple inheritance.

An Inhibitory Mechanism for Goal-Directed Analogical Mapping

Theories of analogical thinking have differed in the roles they ascribe to processing goals as a source of constraint on analogical mappings. We report an experiment that examines the impact of processing goals on subjects' mappings in (a) a task involving generation of plot extensions for soap opera scripts, and (b) an explicit mapping task based on characters in the scripts. The scripts were written so that the mappings for central characters were four-ways ambiguous. Manipulations of subjects' processing goals influenced their preferred mappings, both in the plot-extension and mapping tasks. In the latter task, goal-irrelevant information contributed to the resolution of mappings that were ambiguous on the basis of goal-relevant information alone. The qualitative pattern of results was successfully simulated using ACME, a constraint-satisfaction model of mapping, in which processing goals are assumed to control an inhibitory process of selective attention. Processing goals attenuate the activation level of goal-irrelevant information, reducing or even eliminating its impact on mapping decisions.
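To make the attenuation idea concrete, the sketch below relaxes a generic constraint-satisfaction network of the kind used in ACME-style mapping models, with the input to goal-irrelevant units scaled down by an inhibitory gain. The weight matrix, decay rate, attenuation value, and function names are illustrative assumptions, not the authors' exact update rule.

    import numpy as np

    # A generic constraint-satisfaction relaxation: W[i, j] > 0 links mutually
    # supporting mapping hypotheses, W[i, j] < 0 links rivals; ext is external
    # support (e.g., from similarity); goal_relevant is a 0/1 vector.

    def settle(W, ext, goal_relevant, steps=100, decay=0.1, attenuation=0.3):
        a = np.zeros(W.shape[0])
        gain = np.where(goal_relevant == 1, 1.0, attenuation)   # attenuate irrelevant units
        for _ in range(steps):
            net = gain * (W @ a + ext)
            a = np.clip(a * (1 - decay) + net, -1.0, 1.0)
        return a          # the winning mapping hypotheses end up with the highest activation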

Simulating Tilt Illusions with Lateral Inhibition and a Virtual Axis

When a vertical test stimulus is presented simultaneously within a surrounding stimulus of orientation 10-30° clockwise from vertical, the test stimulus appears slightly counter-clockwise from vertical. In contrast, when the surrounding stimulus is 60-80° clockwise from vertical, the test stimulus appears slightly clockwise from vertical. Lateral inhibition between orientation-selective neurons can account for the former effect (repulsion), but not for the latter effect (attraction). However, if an orthogonal "virtual axis" is also present and exerts its own lateral inhibition, both effects can be accounted for. A mathematical model demonstrates quantitatively how this may occur in the visual system. One simulation with narrowly tuned orientation-selectivity functions produced tilt illusions of similar magnitude to that observed with humans at normal presentation durations. A simulation with more broadly tuned functions produced tilt illusions of much greater magnitude, as are found with humans at very short presentation durations. Based on the model's performance, human performance and neurophysiological data, it is suggested that: 1) lateral inhibition may be the immediate cause of both direct and indirect tilt illusions, and 2) the "virtual axis" may be a real neural mechanism and may be found in greater proportion in extrastriate cortex than in striate cortex.
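The repulsion (direct) effect can be illustrated with a bare-bones population of orientation-tuned units whose response to the vertical test is reduced by inhibition from units driven by the surround. The tuning width, inhibition constant, and decoding scheme below are invented for the illustration, and the "virtual axis" that carries the attraction effect is deliberately omitted; this is not the authors' model.

    import numpy as np

    prefs = np.arange(0.0, 180.0, 5.0)              # preferred orientations (deg)

    def tuning(theta, prefs, sigma=15.0):
        d = np.abs(((theta - prefs + 90.0) % 180.0) - 90.0)   # circular difference, period 180 deg
        return np.exp(-(d ** 2) / (2 * sigma ** 2))

    def perceived_orientation(test=0.0, surround=20.0, k_inhib=0.4):
        drive = tuning(test, prefs)                  # response to the vertical test
        inhibition = k_inhib * tuning(surround, prefs)   # lateral inhibition from the surround
        r = np.clip(drive - inhibition, 0.0, None)
        # Population-vector decoding on the doubled-angle circle (orientation has period 180 deg).
        angles = np.deg2rad(2 * prefs)
        est = 0.5 * np.rad2deg(np.arctan2((r * np.sin(angles)).sum(), (r * np.cos(angles)).sum()))
        return est

    print(perceived_orientation(0.0, 20.0))   # negative: perceived tilt is repelled away from the surround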

Where Does Systematicity Come From?

Human language and memory are only quasi-systematic. They are composed of context-free (systematic) mappings, context-sensitive mappings, and idiosyncrasies. Consequently, generalizations to novel stimuli may be systematic if they result from the context-free mappings, or may become "regularized" toward known stimuli if they result from the context-sensitive mappings. Two factors that affect the degree of systematicity are the structure of the training corpus and the amount of attention or vigilance paid to the task. More systematic training corpora and more attention produce more systematic responses and fewer specific context-sensitive regularizations. A simple PDP model is used to demonstrate these phenomena. A 3-layer feedforward network learns an auto-associative mapping. Untrained stimuli are tested to see if the model will respond with the systematic generalization or with a specific regularization by activating the output pattern for the nearest trained neighbor.

Supporting Situated Interpretation

This paper discusses the role of interpretation in innovative design and proposes an approach to providing computer support for interpretation in design. According to situated cognition theory, most of a designer's knowledge is normally tacit. Situated interpretation is the process of explicating something that is tacitly understood, within its larger context. The centrality of interpretation to non-routine design is demonstrated by: a review of the design methodology of Alexander, Rittel, and Schon; a protocol analysis of a lunar habitat design session; and a summary of Heidegger's philosophy of interpretation. These show that the designer's articulation of tacit knowledge takes place on the basis of an understanding of the design situation, a focus from a particular perspective, and a shared language. As knowledge is made explicit through the interpretive processes of design, it can be captured for use in computer-based design support systems. A prototype software system is described for representing design situations, interpretive perspectives, and domain terminology to support interpretation by designers.

A Theory of Skilled Memory

A theory of mnemonic expertise is outlined along with findings from initial tests. The expertise belongs to a normal adult (DD) who developed a digit span of 104 through extended practice. The theory describes how mechanisms consistent with the principles of skilled memory (Chase & Ericsson, 1982; Ericsson & Staszewski, 1989), and identified by analyses of DD's behavior, support his skill. Implemented as a computational model, the theory assumes that distinct knowledge structures mediate both DD's encoding of short segments of trial lists as elaborate, well-structured LTM representations and their retrieval in several recall tasks. Current testing investigates the model's ability to generate contextual codes, a class of patterned memory elaborations experimentally shown to improve DD's serial recall (Staszewski, 1990). Given the same lists DD received, it successfully generates over 80% of the contextual codes in his verbal reports. Because successful simulation of contextual codes entails accurate simulation of operations performed by first-order coding mechanisms, the results support theoretical assumptions about the knowledge underlying DD's coding operations. The model's overly powerful coding suggests that more stringent architectural constraints must be incorporated to rigorously demonstrate how skilled memory can increase working memory capacity in a normal cognitive architecture and support expertise.

Strategies in Pronoun Comprehension

The aim of this study was to distinguish between three heuristic strategies proposed to account for the assignment of ambiguous pronouns: a subject assignment strategy, a parallel function strategy and a parallel order-of-mention strategy. According to the subject assignment strategy, a pronoun is assigned to a preceding subject noun phrase. A parallel function strategy predicts that a pronoun will be assigned to a noun phrase with a parallel grammatical function, whereas a parallel order strategy predicts that a pronoun will be assigned to a noun phrase in a parallel position in a previous clause. These strategies were tested by examining the interpretation of ambiguous subject and non-subject pronouns. The results showed a bias to assign a pronoun to a preceding subject, suggesting the operation of a subject assignment strategy. However, this bias was reversed for non-subject pronouns. These pronouns showed a bias to preceding non-subjects with parallel grammatical roles, thus supporting a parallel function hypothesis. Finally, the subject assignment bias was reduced when a non-subject pronoun had a different grammatical role from the non-subject antecedent, thus supporting a parallel order-of-mention strategy. We conclude that all three strategies may constrain the assignment of ambiguous pronouns.

Establishing Long-Distance Dependencies in a Hybrid Network Model of Human Parsing

This paper presents CAPERS, a hybrid spreading activation/marker passing architecture for parsing, whose self-processing network directly represents a parse tree. CAPERS establishes syntactic dependencies through the purely local communication of simple syntactic features within the network. The structural constraints on two nodes in a long-distance syntactic relation are broken down into local components, each of which can be verified entirely between pairs of adjacent nodes along the feature-passing path between the two dependent nodes. This method of establishing long-distance syntactic relations, in conjunction with the competitive dynamics of the network, accounts for psycholinguistic experimental data on filler/gap constructions.

Sequencing Explanations to Enhance Communicative Functionality

The extent to which the segments of an explanation succeed in carrying out their intended function depends in p

A Connectionist Model of Speech Act Prediction

We developed a connectionist architecture that accounts for the systematicity in the sequential ordering of speech act categories. That is, to what extent can the category of speech act n+1 be successfully predicted given speech acts 1 through n? Three connectionist architectures were contrasted: Elman's recurrent network, a single-entry backpropagation network, and a double-entry backpropagation network. The recurrent network fit the speech act sequences in naturalistic conversation better than the backpropagation networks. Most of the systematicity was captured by the network's use of 2 to 3 prior speech acts of context.
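For readers unfamiliar with the recurrent architecture, the sketch below shows the forward pass of a simple Elman-style network predicting the next speech act category, with the hidden layer copied back to context units on each step. The layer sizes are arbitrary, the weights are random placeholders, and training by backpropagation is omitted; this is not the authors' implementation.

    import numpy as np

    rng = np.random.default_rng(0)
    n_categories, n_hidden = 12, 20
    W_in  = rng.normal(0, 0.1, (n_hidden, n_categories + n_hidden))  # input + context -> hidden
    W_out = rng.normal(0, 0.1, (n_categories, n_hidden))             # hidden -> output

    def softmax(x):
        e = np.exp(x - x.max())
        return e / e.sum()

    def predict_sequence(one_hot_speech_acts):
        context = np.zeros(n_hidden)                      # context units start empty
        predictions = []
        for act in one_hot_speech_acts:
            hidden = np.tanh(W_in @ np.concatenate([act, context]))
            predictions.append(softmax(W_out @ hidden))   # distribution over speech act n+1
            context = hidden.copy()                       # copy hidden state back to context units
        return predictions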

Simulation of Cued-Recall and Recognition of Expository Texts by Using the Construction-Integration Model

The purpose of this paper is to compare the results of adults' performance in cued recall and recognition of expository texts with the results of simulations derived from the construction-integration model proposed by Kintsch (1988, 1990). In the cued recall task, we manipulated three parameters: the weights of the connections in the net, the size of the short-term memory buffer, and the representations of the sentence used as a cue. The main results show that we need to simulate the macroprocessing and the prior knowledge of the learners in order to improve the simulations. In the recognition task, we simulated the representation of four levels (surface syntactic variation, close semantic variation, inference, and distant semantic variation referring to the same situation model as the text) using different connection weights as a function of the decay of the memory trace. The main results show the necessity of taking these levels into account to explain the subjects' cognitive processes involved in a comprehension/memorization task. For both experiments, the activation values obtained correctly fit the hierarchy of the experimental data.
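As a reminder of how activation values arise in the construction-integration framework, the sketch below runs the integration phase: activation is repeatedly spread over the connection matrix among propositions and renormalized until it settles. The connection matrix and function name are arbitrary illustrations, not the simulations reported in the paper.

    import numpy as np

    def integrate(C, steps=50):
        """C[i, j] is the connection strength between propositions i and j."""
        a = np.ones(C.shape[0])          # start with uniform activation
        for _ in range(steps):
            a = np.clip(C @ a, 0.0, None)
            a = a / a.max()              # renormalize so activations stay bounded
        return a

    C = np.array([[1.0, 0.6, 0.0],
                  [0.6, 1.0, 0.3],
                  [0.0, 0.3, 1.0]])
    print(integrate(C))                  # better-connected propositions end up more active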

Teaching Science with a Child-Focused Internet Resource: What Do Teachers Need to Know, Where Do They Learn It, and How Does It Change Their Teaching?

This research addresses the learning done by fifth and sixth grade teachers as they facilitate an innovative three-week unit on weather, incorporating computer communications for information research and inter-classroom information exchange between distant student groups. The teachers may be situated in a community of practice where they learn about computer use in regular classrooms while teaching. Little research has been done on teachers' learning in these types of settings, and no communities of practice linked by computer networks have been reported in the education research literature. Teaching practices and learning were studied. Both quasi-experimental and qualitative methods were employed. To discern the differences in teaching practices, comparisons were made of videotapes taken in classrooms with computers and others without. Teacher journals, questionnaires, and telephone interviews were employed to understand problems faced by the teachers, their contacts with experts, the problem resolution, and changes to their teaching practices due both to their learning experiences and to the innovations of the curriculum. With a better understanding of the teachers' learning, the research team will revise future field tests, anticipating teachers' needs when they incorporate computer telecommunication technology in innovative science units.

Attenuation of Belief Perseverance in a Covariation Judgment Task

A wide variety of judgment tasks have shown that once a reasoner favors a hypothesis, encountering evidence which contradicts it might not, in and of itself, dislodge that hypothesis. The interaction of prior belief and new evidence was studied in a covariation judgment task where subjects monitored multiple predictor-outcome relationships. Each relationship was programmed to reflect a strong positive contingency in a first phase, but in the second phase the contingency was negative, disconfirming the acquired expectation. For two of these relationships, the negative evidence was framed as positive evidence for alternative relationships, while in a third relationship, the negative evidence was not presented as supporting alternative explanations. Subjective contingency estimates indicated that the negative contingency was recognized in all three conditions. Belief perseverance, as measured by the likelihood of predicting the outcome on trials where the original predictor variable was present, was the strongest in the condition without alternatives. These results support the notion that belief change is a function of the negative evidence pertaining to that belief and the presence of alternative explanations which seek their support from that same evidence.
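The positive and negative contingencies referred to here can be summarized by the standard deltaP measure, the difference between the outcome probability when the predictor is present and when it is absent. The abstract does not say which contingency index was programmed, so this is only a reference point; the trial counts below are invented.

    # deltaP = P(outcome | predictor present) - P(outcome | predictor absent).
    # Arguments are the four cell frequencies of the 2x2 contingency table.

    def delta_p(present_outcome, present_no_outcome, absent_outcome, absent_no_outcome):
        p_given_present = present_outcome / (present_outcome + present_no_outcome)
        p_given_absent = absent_outcome / (absent_outcome + absent_no_outcome)
        return p_given_present - p_given_absent

    print(delta_p(30, 10, 10, 30))   # +0.5: a strong positive contingency (like phase 1)
    print(delta_p(10, 30, 30, 10))   # -0.5: a negative contingency (like phase 2)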

The Contributions of Studying Examples and Solving Problems to Skill Acquisition

There is little doubt that examples play a major role in acquiring a new skill. How examples improve learning, however, is subject to some debate. Recently, two different classes of theories have been proposed to explain why examples are such an effective manner of learning. Example Generalization models suggest that problem solving rules are acquired while studying examples. Knowledge Compilation models, on the other hand, suggest that examples are useful because they guide future problem solving, where the necessary rules are created. Consistent with knowledge compilation models, we found that separating target problems from source examples hindered learning because the source examples could not be remembered to guide problem solving. We also found that if sources are not accessible or remembered during problem solving, learning occurs best when the sources are problems to be solved, rather than examples. Taken together, these results provide strong support for the knowledge compilation view: in order for an example to be most effective, the knowledge gained from the example must be applied to solving a new problem.

Generating Effective Instructions

This paper discusses a corpus-based approach to the generation of effective instructions. The approach advocated employs a detailed linguistic study of a corpus of a broad range of instructional texts to determine both the range of grammatical forms used in instructional text and the contexts in which they are used. The forms that are consistently used by technical writers are taken to be the most effective. The results of this study are implemented in an automated text generation system for instructional text. The primary focus of this study has been the use of rhetorical relations to effectively code actions and their procedural relationships in instructional text, but the approach can generally be applied to different linguistic issues and text genres.

What Mediates the Self-explanation Effect? Knowledge Gaps, Schemas or Analogies?

Several studies have found that learning is more effective when students explain examples to themselves. Although these studies show that learning and self-explanation co-occur, they do not reveal why. Three explanations have been proposed, and computational models have been built for each. The gap-filling explanation is that self-explanation causes subjects to detect and fill gaps in their domain knowledge. The schema formation explanation is that self-explanation causes the learner to abstract general solution procedures and associate each with a general description of the problems it applies to. The analogical enhancement explanation is that self-explanation causes a richer elaboration of the example, which facilitates later use of the example for analogical problem solving. We claim that, in one study at least, gap-filling accounts for most of the self-explanation effect.

Object Knowledge Influences Visual Image Segmentation

Visual image segmentation is the process by which the visual system groups locations that are part of the same object. Can knowledge of objects influence image segmentation, or is the segmentation process isolated from object information? The use of object knowledge at this stage of vision might seem premature, as the goal of segmentation is to provide input to object recognition. However, purely bottom-up image segmentation has proven a computationally difficult task, suggesting that a "knowledge-based" approach might be required. We addressed this issue using two segmentation tasks: subjects either determined whether a small 'x' was located inside or outside the region subtended by a block shape, or they determined whether two small x's were on the same shape or different shapes. The familiarity of the shapes was manipulated, and subjects were fastest to segment the visually familiar shapes. These results suggest that image segmentation can be partly guided by information about familiar objects, consistent with knowledge-based image segmentation models.

Levels of Competition in Lexical Access

For a visual word to be recognised it must be singled out from among all other possible candidates. The less distinct a lexical entry is, the more candidates there will be competing with it, and so recognition will be inhibited. In opposition to this view, the findings of Andrews (1989, 1992) show a facilitatory effect of neighborhood size: low-frequency words which bore orthographic similarity to many other words were recognised more quickly than those with fewer neighbors. Since neighborhood size as determined by Coltheart's "N" metric was designed as essentially a measure of lexical similarity, Andrews' result could be interpreted as evidence for lexical-level facilitation. In the present experiments we repeat both the lexical decision (LDT) and naming studies of Andrews using a more tightly controlled stimulus set. Only in LDT are her results supported; in naming we find no facilitatory effect of neighborhood size. We discuss why any truly lexical-level facilitation is inherently improbable.
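Coltheart's N for a word is simply the number of other words of the same length that differ from it by exactly one letter in one position. The sketch below makes that count concrete; the mini-lexicon is invented for illustration.

    def neighborhood_size(word, lexicon):
        # Count same-length words differing from the target in exactly one letter position.
        return sum(
            len(w) == len(word)
            and w != word
            and sum(a != b for a, b in zip(w, word)) == 1
            for w in lexicon
        )

    lexicon = {"cave", "cove", "gave", "wave", "care", "case", "cafe", "come"}
    print(neighborhood_size("cave", lexicon))   # 6: cove, gave, wave, care, case, cafe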

Constraints on Knowledge Acquisition: Evidence from Children's Models of the Earth and the Day/Night Cycle

First, third, and fifth grade children were asked questions about the shape of the earth and about the day/night cycle. The majority of the children used a small number of well-defined mental models of the earth, the sun, and the moon to explain the day/night cycle. The younger children formed initial mental models which explained the day/night cycle in terms of everyday experience (e.g., the sun goes down behind the mountains; clouds cover up the sun). The older children constructed synthetic mental models (e.g., the sun and moon revolve around the stationary earth every 24 hours; the earth rotates in an "up/down" direction with the sun and moon fixed at opposite sides) which are attempts to synthesize aspects of the scientific view with aspects of their initial models. A few of the older children appeared to have constructed a mental model of the day/night cycle similar to the scientific one. The children's models of the shape of the earth provided strong "second-order" constraints on their models of the day/night cycle (e.g., children with flat earth models do not explain the day/night cycle in terms of the movement of the earth). The changes in the children's models with age were explained in terms of the gradual reinterpretation of a set of presuppositions, some of which are present early in the child's life, and others which emerge later out of previously acquired knowledge.

Modeling Global Synchrony in the Visual Cortex by Locally Coupled Neural Oscillators

A fundamental aspect of perception is to bind spatially separate sensory features, essential for object identification, segmentation of different objects, and figure/ground segregation. Theoretical considerations and neurophysiological findings point to the temporal correlation of feature detectors as a binding mechanism. In particular, it has been demonstrated that the cat visual cortex exhibits 40-60 Hz stimulus-dependent oscillations, and synchronization exists in spatially remote columns (up to 7 mm) which reflects global stimulus properties (Gray et al., 1989; Eckhorn et al., 1988). What neural mechanisms underlie this global synchrony? Many neural models thus proposed end up relying on global connections, leading to the question of whether lateral connections alone can produce remote synchronization. With a formulation different from the frequently used phase model, we find that locally coupled neural oscillators can indeed yield global synchrony. The model employs a previously suggested mechanism whereby the efficacy of the connections is allowed to change on a fast time scale. Based on the known connectivity of the visual cortex, the model outputs closely resemble the experimental findings. This model lays a computational foundation for Gestalt perceptual grouping.
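For intuition only, the sketch below simulates a ring of phase oscillators in which each unit is coupled to just its two nearest neighbors and still ends up globally synchronized. Note the hedge: the authors explicitly use a formulation different from the phase model, with fast-changing connection efficacies, so this is not their mechanism; it only conveys the qualitative point that purely local connections can entrain a whole population. All parameter values are arbitrary.

    import numpy as np

    def ring_coherence(n=50, steps=20000, dt=0.01, k=2.0, seed=0):
        rng = np.random.default_rng(seed)
        theta = rng.uniform(0.0, np.pi, n)      # start within half a cycle of each other
        omega = 1.0                             # identical natural frequencies
        for _ in range(steps):
            left, right = np.roll(theta, 1), np.roll(theta, -1)   # nearest neighbors only
            theta = theta + dt * (omega + k * (np.sin(left - theta) + np.sin(right - theta)))
        return np.abs(np.exp(1j * theta).mean())   # 1.0 means full synchrony

    print(ring_coherence())   # rises toward 1 as the locally coupled ring synchronizes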

A Computational Model of Human Emotions

AI and Cognitive Science have largely ignored the modeling of emotions and their influence on cognition. Yet clinical psychologists suggest that emotions are of all factors the most important for driving people's motivations, that is, in establishing goals and intentions. Emotions and emotional reactions are instrumental in understanding what problems people solve. This paper describes an implementation of a model of human emotions. The system we have built is a considerable extension of the model described by [Ortony, Clore, and Collins, 1988]. The system consists of emotion detectors for almost 30 emotions related to events, agents and objects; emotional intensities are also computed. An extensive simulation has been constructed to demonstrate the operation of the system.
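To give a flavor of event-based appraisal in the Ortony, Clore, and Collins framework the paper extends, the sketch below maps an event's desirability with respect to goals onto an emotion label and an intensity. The two rules, the intensity function, and the function name are simplified assumptions for illustration, not the paper's detector set of almost 30 emotions.

    def appraise_event(desirability, about_self=True):
        """desirability in [-1, 1], judged relative to the agent's goals (a toy appraisal rule)."""
        if about_self:
            emotion = "joy" if desirability > 0 else "distress"
        else:
            emotion = "happy-for" if desirability > 0 else "pity"
        return emotion, abs(desirability)           # (emotion label, intensity)

    print(appraise_event(0.8))                      # ('joy', 0.8)
    print(appraise_event(-0.4, about_self=False))   # ('pity', 0.4)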

Linear Separability as a Constraint on Information Integration

In this paper we examined the extent to which linear separability constrained learning and categorization in different content domains. Linear separability has been a focus of research in many different areas such as categorization, connectionist modeling, machine learning, and social cognition. In relation to categorization, linearly separable (LS) categories are categories that can be perfectly partitioned on the basis of a weighted, additive combination of component information. We examined the importance of linear separability in object and social domains. Across seven experiments that used a wide variety of stimulus materials and classification tasks, LS structures were found to be more compatible with social than object materials. Nonlinearly separable structures, however, were more compatible with object than social materials. This interaction between linear separability and content domain was attributed to differences in the types of knowledge and integration strategies that were activated. It was concluded that the structure of knowledge varies with domain, and consequently it will be difficult to formulate domain-general constraints in terms of abstract structural properties such as linear separability.
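The definition of linear separability used here (perfect partition by a weighted, additive combination of features) can be checked mechanically: a perceptron can learn any LS structure but not a nonlinearly separable one such as XOR. The feature codings and category assignments below are illustrative, not the paper's stimulus structures.

    import numpy as np

    def perceptron_separable(X, y, epochs=1000):
        # Returns True if a weighted, additive rule perfectly partitions the categories.
        X = np.hstack([X, np.ones((len(X), 1))])      # add a bias term
        w = np.zeros(X.shape[1])
        for _ in range(epochs):
            errors = 0
            for xi, yi in zip(X, y):
                pred = 1 if xi @ w > 0 else 0
                w += (yi - pred) * xi
                errors += int(pred != yi)
            if errors == 0:
                return True
        return False

    features = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
    print(perceptron_separable(features, np.array([0, 0, 0, 1])))  # additive structure: True
    print(perceptron_separable(features, np.array([0, 1, 1, 0])))  # XOR structure: False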

Reductionistic Conceptual Models and the Acquisition of Electrical Expertise

Our objective has been to determine whether working with reductionistic models reduces students' misconceptions, and increases the coherence and flexibility of their expertise as they solve problems and generate explanations. We conducted experimental trials of an interactive learning environment that provides models of circuit behavior. In these trials, we examined students' performance on a variety of circuit problems before and after they worked with either (a) a "transport" model alone, or (b) the transport model augmented with explanations of its processes in terms of a "particle" model. The posttest results reveal that, while both groups performed well on a wide range of tasks, students who received the particle-model explanations achieved higher levels of performance on tasks that require an understanding of voltage and its distribution. We conjecture that this is due to the particle model providing students with a mechanistic model for charge distribution that is consistent with the behavior of the transport model and that inhibits the construction and use of certain common misconceptions.

Representation of Variables and Their Values in Neural Networks

Neural nets (NNs) such as multi-layer feedforward and recurrent nets have had considerable success in creating representations in the hidden layers. In a combinatorial domain, such as a visual scene, a parsimonious representation might be in terms of component features (or variables) such as colour, shape and size (each of which can take on multiple values, such as red or green, or square or circle). Simulations are described demonstrating that a multi-variable encoder network can learn to represent an input pattern in terms of its component variables, wherein each variable is encoded by a pair of hidden units. The interesting aspect of this representation is that the number of hidden units required to represent arbitrary numbers of variables and values is linear in the number of variables, but constant with respect to the number of values for each variable. This result provides a new perspective for assessing the representational capacity of hidden units in combinatorial domains.
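The claimed scaling can be made concrete with a small helper that works out the layer sizes of such an encoder: two hidden units per variable, so the hidden layer grows with the number of variables and not with the number of values each variable can take. The one-hot input coding and the function name are assumptions for illustration, not details from the paper.

    def layer_sizes(values_per_variable):
        n_variables = len(values_per_variable)
        n_input = sum(values_per_variable)      # one-hot code for each variable's value
        n_hidden = 2 * n_variables              # a pair of hidden units per variable
        return n_input, n_hidden, n_input       # auto-associative: output mirrors the input

    # Three variables (colour, shape, size) with 4, 6, and 3 values respectively:
    print(layer_sizes([4, 6, 3]))   # (13, 6, 13): the hidden size depends only on the 3 variables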

The Role of Structural Alignment in Conceptual Combination

Many researchers have suggested that understanding novel noun phrases involves a process of conceptual combination in which people determine how two or more concepts fit together to form a new concept. One important way that people combine concepts is by property mapping, which involves asserting that a property of one concept is true of the other concept, as in "box that is striped" for "skunk box." An experiment investigated the hypothesis that property mapping occurs by structural alignment, in which mental representations are aligned or put into correspondence. The result of this process is primarily a set of matching elements (called commonalities) and a set of mismatching elements related to the commonalities (called alignable differences). The experiment compared property mapping definitions to the alignable differences listed by subjects in a comparison task which is known to involve structural alignment. Consistent with the hypothesis, there was a strong correspondence between property mapping definitions and alignable differences, compared to another strategy in conceptual combination not thought to involve structural alignment (slot filling).

Infants' Expectations about the Motion of Animate versus Inanimate Objects

This study explores the ways in which infants reason about human action. Although recent research supports the view that young infants' reasoning about object physics is guided by a set of core principles, there is little evidence for early principles of this sort in infants' reasoning about human action. To explore this issue, a habituation study was done comparing 7-month-olds' reasoning about simple causal sequences involving people to their reasoning about those involving inanimate objects. Our findings suggest that although 7-month-olds expect that the motion of inanimate objects will be constrained by the principle of contact (an object affects the motion of another object if and only if the two objects come into contact), they do not expect human motion to be constrained in this way. These findings provide preliminary evidence that infants have principled expectations to guide their reasoning and learning about human action.

Causal Mechanisms as Temporal Bridges in a Connectionist Model of Causal Attribution

We use a connectionist model which relies on the encoding of temporal relationships among events to investigate the role of causal mechanisms in causal attribution. Mechanisms are encoded as intervening events with temporal extent that occur between the offset of a causal event and the onset of an effect. In one set of simulations, the presence of intervening events facilitated acquisition of a relationship between cause and effect via the mechanism. In a second set of simulations, prior experience with mechanisms enhanced development of a cause-effect relationship during later training absent the mechanism. The results provide evidence that causal mechanisms can facilitate causal attribution via Humean cues-to-causality.

A Cognitive Taxonomy of Numeration Systems

In this paper, we study the representational properties of numeration systems. We argue that numeration systems are distributed representations: representations that are distributed across the internal mind and the external environment. We analyze number representations at four levels: dimensionality, dimensional representations, bases, and symbol representations. The representational properties at these four levels determine the representational efficiencies of numeration systems and the performance of numeric tasks. From this hierarchical structure, we derive a cognitive taxonomy that can classify most numeration systems.
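As a concrete reference point for the base level of this analysis, a positional numeration system with base b represents a quantity as a weighted sum of digit values, value = sum of d_i * b^i. The helper below only concretizes that one level (the digits and base are arbitrary examples); it is not part of the taxonomy itself.

    def digits_to_value(digits, base):
        """digits are given most-significant first, e.g. [3, 0, 7] in base 10 is 307."""
        value = 0
        for d in digits:
            value = value * base + d
        return value

    print(digits_to_value([3, 0, 7], 10))    # 307
    print(digits_to_value([1, 0, 1, 1], 2))  # 11: the same quantity needs more digits in a smaller base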