Kybernetes vol.34, nos. 1/2, 2005, pp. 89-104. Penultimate version.
The mainstream conception of memory as an encoding–storage–retrieval device is criticized for not being able to account for various phenomena such as false recognition, intrusion, and confabulation. Based on Heinz von Foerster’s insight that cognitive functions should not be treated as separate units, I present an alternative constructivist perspective that does not treat memory as a separate module. Rather it emphasizes the inseparable nature of the cognition–memory compound. Finally, I outline the importance of this insight for radical constructivism.
Key words. Anthropomorphization; change blindness; childhood amnesia; false memory; language; radical constructivism; trivialization.
The difference between false memories and true ones is the same as for jewels:
it is always the false ones that look the most real, the most brilliant
Salvador Dali, The Secret Life of Salvador Dali, 1942
Memory is still a terra incognita. Psychology and cognitive science have investigated memory ever since they started as scientific disciplines. They have dichotomized it along various dimensions such as: primary and secondary memory (James 1890), sensory, short-term and long-term memory (Atkinson & Shiffrin 1968), procedural and declarative memory (e.g., Squire 1987), implicit and explicit memory, episodic and semantic memory (Tulving 1972), etc. All these approaches are based on the mainstream definition of memory: “The term memory implies the capacity to encode, store, and retrieve information” (Baddeley 2001, p. 514), or as Heinz von Foerster critically put it, one expects “a certain invariance of quality of that which is stored at one time and then retrieved at a later time” (Foerster 1969, p. 102). However, this common understanding of memory as an encoding-storage-retrieval device places us next to computers: Like their memory, we too are expected (by others as well as by ourselves) to remember data and events accurately, whether at school, in quiz shows, or in the supermarket. Forgetting data and twisting memories are interpreted as failure and distortion.
This prevailing perspective considers phenomena like false memories as aberrations that can and should be minimized (e.g., Dodson, Koutstaal, and Schacter 2000). Perhaps the most sensational aspect of false memories was described by Elizabeth Loftus and colleagues (e.g., Loftus 2003), who demonstrated the ease with which fake memories can be generated. In a typical experiment they asked subjects who had visited Disneyland before to evaluate advertisements and answer questions about their trip. The first group of subjects received an ad about the theme park that did not mention any cartoon characters. The second group read the same text while a four-foot-tall cardboard figure of Bugs Bunny was placed in the room. The third group received a fake Disneyland ad featuring Bugs Bunny. And the fourth, double-exposure group got both the fake ad and the cardboard cutout. Afterwards all participants were asked whether they had met Bugs Bunny on their visit to Disneyland and whether they had shaken his hand. A remarkable 30 percent of subjects in group 3, and 40 percent in group 4, said that they had indeed met him, while only eight percent of the first group, and four percent of the second, thought they had met the rabbit in Disneyland. It seems that the mere suggestion of the cartoon figure, either in a fake ad or as a life-size cardboard cutout, was enough to convince many of the participants that they had met him, although Bugs Bunny is a Warner Bros. cartoon character and would never be featured at a Disney park. In another experiment, Loftus and Pickrell (1995) succeeded in convincing about 25% of their test subjects that they had been lost in a shopping mall as children even though this had never happened. Without doubt, such “memory distortions” have a great impact on assessing the reliability of eyewitness testimony.
In addition to such “confabulations” (defined as fantasies that have unconsciously modified or even replaced facts in memory based on past-event suggestions), the scientific literature offers further examples of false memories. False recognition describes the effect that subjects claim that a novel word or event is familiar, and intrusions refer to the production of non-studied information in memory experiments (e.g., Schacter et al. 1998). The Deese/Roediger-McDermott paradigm makes these effects visible. As originally reported by Deese (1959) and replicated by Roediger and McDermott (1995), it is possible to create illusory memories for words that do not appear in the word lists subjects have to study. If the listed entries (like “dream,” “night,” “bed,” etc.) are semantically associated with a critical but non-presented “lure” word (such as “sleep”), participants will be biased to reproduce the lure in an immediate free recall test. The recognition rate of the non-studied word can be as high as for the words originally presented. Experiments by Seamon, Luo, and Gallo (1998) went one step further. They showed that critical lures are recognized even if the study items were presented for just 20 ms, which is assumed to be below the threshold necessary for becoming consciously aware of the items. Two striking conclusions can be drawn from these findings. First, human memory “remembers” words that have never been presented. Second, it also remembers words whose semantically associated neighbors we do not even consciously register. Does such a memory resemble an ordinary encoding-storage-and-retrieval device? [1]
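One common way to think about the lure effect is in terms of associative activation: each studied word passes some activation to its semantic associates, and a strongly connected lure can accumulate more activation than a genuinely studied item. The sketch below illustrates this idea only schematically; the association strengths, the recall threshold, and the function names are invented for illustration and are not taken from the experimental literature.

```python
# Hypothetical sketch of how associative activation could produce a "false
# memory" for a non-presented lure word, in the spirit of the
# Deese/Roediger-McDermott findings. All numbers are invented.

# Invented association strengths from studied words to candidate words.
ASSOCIATIONS = {
    "dream": {"sleep": 0.8, "night": 0.4},
    "night": {"sleep": 0.7, "dream": 0.5},
    "bed":   {"sleep": 0.9, "night": 0.3},
}

def activation(study_list, word):
    """Total activation a word receives from all studied associates."""
    return sum(ASSOCIATIONS.get(studied, {}).get(word, 0.0)
               for studied in study_list)

def recalled(study_list, candidates, threshold=1.0):
    """Candidate words whose accumulated activation crosses the threshold."""
    return [w for w in candidates if activation(study_list, w) >= threshold]

study = ["dream", "night", "bed"]
# The lure "sleep" was never studied, yet it gathers activation from all
# three associates (0.8 + 0.7 + 0.9 = 2.4) and crosses the threshold.
print(recalled(study, ["sleep", "night"]))
```

Under these toy numbers the never-presented lure is "recalled" while a merely weakly connected word is not, mirroring the qualitative pattern of the paradigm.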
If our memory worked like a computer’s data storage system, we would need a brain a mile in diameter, packed with nerve cells, to account for what we know... If our memory worked like the computer, we would never find our car [in the parking lot] because we never see things exactly the same way twice.
Heinz von Foerster, quoted in Segal, The dream of reality, 2001
Memory and forgetting were also among the topics Heinz von Foerster was intrigued by. “In school,” he writes in the preface to his 2003 paper collection, “I always had difficulties remembering facts, data, lists of events: Was Cleopatra the girlfriend of Lincoln or Charlemagne or Caesar?” As a person who found it easy to visualize, he soon found a means to circumvent his “weakness”: plotting events on a timeline helped him come to grips with historical ordering relations. Ultimately, aligning his contemplations with Hermann Ebbinghaus’s “forgetting curve” resulted in an early theory of his, published in 1948 [2], which provided a quantum-mechanical interpretation of Ebbinghaus’s findings based on similarities with the half-life constant of macromolecules. Ebbinghaus (1885) was the first to study memory from a psychological point of view, albeit with himself as the only test subject [3]. In assessing the retention of lists of nonsense consonant-vowel-consonant syllables [4], he discovered that within the first hour items are rapidly forgotten, and that forgetting flattens out at about 30 percent for delays of up to two days. However, Ebbinghaus was interested in how long it takes to relearn the lists; therefore he measured the time he saved when relearning the syllables. Plotting the percentage savings against time yielded a steeply falling exponential curve. Foerster sought to capture the behavior of this curve in mathematical terms and learned that it compares with the law of decay in physics. This gave him the idea to apply the quantum theory of microstates to the “carriers of the elementary impressions,” or “mems” [5] as he called them, in analogy to genes, which were considered quantized states of complex molecules (Foerster 1949, p. 125). Drawing on Erwin Schrödinger’s (1946) quantum model of the gene, for Foerster establishing a memory content meant lifting the molecule from a lower energy level to a higher one, while forgetting was equivalent to the process of decay.
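The analogy Foerster drew can be made concrete with a toy version of the savings curve: an exponentially decaying component, as in the physical decay law N(t) = N0·e^(−t/τ), sitting on top of the roughly 30 percent floor mentioned above. The functional form and the time constant below are illustrative guesses for the sake of the analogy, not Ebbinghaus's fitted values.

```python
import math

# Toy model of Ebbinghaus's savings curve as exponential decay toward a
# floor of about 30% (the value cited in the text). tau is an invented
# time constant; only the qualitative shape matters here.

def savings(t_hours, floor=0.3, tau=1.0):
    """Fraction of relearning effort saved after a delay of t_hours."""
    return floor + (1.0 - floor) * math.exp(-t_hours / tau)

def half_life(tau):
    """Delay after which the decaying part has dropped to half,
    exactly as in the physical law N(t) = N0 * exp(-t / tau)."""
    return tau * math.log(2)

for t in (0, 1, 24, 48):
    print(f"after {t:2d} h: {savings(t):.0%} savings")
```

Immediately after learning the savings are complete (100%); within hours they collapse toward the 30 percent floor and stay there, which is the behavior Foerster mapped onto the half-life of decaying quantum states.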
Ebbinghaus’s nonsense-syllable experiments were but a starting point for both Foerster’s quantum-mechanical theory and his lifelong affection for memory. Later, when Foerster headed the Biological Computer Lab in Illinois, one of the funds he raised was an OEC grant on “Cognitive Memory: A Computer-Oriented Epistemological Approach to Information Storage and Retrieval,” in 1969 (e.g., Müller 2000). Before that he had authored and co-authored several publications on memory (e.g., Foerster 1965, 1967; Foerster, Inselberg and Weston 1968; Foerster 1969). He described his revised view of the nature of memory in 1970 as follows: “Memory contemplated in isolation is reduced to ‘recording,’ learning to ‘change,’ perception to ‘input,’ and so on. In other words, in separating these functions from the totality of cognitive processes, one has abandoned the original problem and is now searching for mechanisms that implement entirely different functions which may or may not have any semblance to some processes that are subservient to the maintenance of the integrity of the organism as a functioning unit” (Foerster 1970a, pp. 135-136). This closely resembles Frederic Bartlett’s definition: “Remembering is not a completely independent function, entirely distinct from perceiving, imaging, or even from constructive thinking, but it has intimate relations with them all” (Bartlett 1932, p. 13). Such an integrative view of memory is quite different from the common-sense understanding of memory as a storage-and-retrieval system, or “fridge theory of memory” (Riegler 2003), which assumes memory items to be perfectly stored under preserving conditions. Consequently, Foerster’s understanding went in a different direction, one that integrates cognition with its apparently separable functional components. Based on this understanding, I will present an alternative account of the role of memory and its implications for radical constructivism. [6]
Computing machines do not have memories...
Computers have storage systems
Heinz von Foerster, quoted in Segal, The dream of reality, 2001
Why should memory not be considered a simple “storage-retrieval mechanism”? There are two reasons that can be identified from Heinz von Foerster’s work.
First, (cognitive) psychology has been strongly influenced by the computer metaphor. In his discipline-defining book, Ulric Neisser (1967, p. 6) wrote, “The task of a psychologist trying to understand human cognition is analogous to that of a man trying to discover how a computer has been programmed.” Foerster, however, expressly warned of such a “charming metaphor” that tries to explain human cognition in terms of computing. As had already been the case with the notion of “computer,” which originally referred to human professionals performing numerical mathematics, the notion of “memory,” too, has been carelessly passed on to machines. Consequently, he referred to such comparisons as “anthropomorphizations”, i.e., “projecting the image of ourselves into things or functions of things in the outside world” (Foerster 1970b, p. 169). While “in principle there is nothing wrong with anthropomorphizations; in most cases they serve as useful algorithms for determining behavior” (Foerster 1970b, p. 170) of other systems [7], such metaphorical conceptions should be avoided. They confuse a description of a phenomenon with a description of a description of a phenomenon and lead to reification, i.e., an identification of the metaphor with the observed entity itself (Maturana 1982, pp. 15–16). Or as Foerster (1970b, p. 173) put it, “...memories of past experiences do not reproduce the causes for these experiences, but — by changing the domains of quality — transform these experiences by a set of complex processes into utterances or into other forms of symbolic or purposeful behavior [...] It is clear that a computer’s ‘memory’ has nothing to do with such transformations, it was never intended to have.”
Second, measuring a behavioral component (such as memory skills) means focusing on the observable output rather than on the internal functioning of that component. Psychologists design their remembering-and-recalling experiments in such a way that subjects have to memorize words and reproduce them at a later instant. This means subjects are treated as memory devices. The complexity of their behavior is reduced to a model over input–output protocols. However, as noted above, false memory phenomena seem to indicate that cognition uses memory neither as a data recorder nor as a dumping ground for past experiences. The alternative is not to consider memory an individual and arbitrarily usable storage-and-retrieval mechanism but rather an indispensable part of cognition without which the latter could not function properly.
Let us consider this in greater detail. The problem appears when the psychologist assesses the output of subjects from her observer point of view: she interprets what the subjects are doing within her own referential system of understanding. If, asked for Napoleon’s year of birth, a student answers “seven years before the Declaration of Independence,” he runs the risk of failing a test whose objective is to recite dates from memory (Foerster 1972). From the perspective of the pupil, however, it might be more attractive to remember events in terms of American history rather than to associate them with bare numbers. So what happens in a psychological study-and-recall setup is that subjects are first fed with lists of words or whatever else they are supposed to remember; then they are tested for what they can recall relative to the examiner’s referential network. Now suppose cognition utilized memory the same way the experimenter uses the subject’s memory; one would then need to assume a little demon inside the cognitive apparatus which interprets the output of the “memory module” against the backdrop of its own referential system of meaning. Of course this idea runs into a vicious circle, as we would need to assume a brain within that demon whose memory is, in turn, interpreted by yet another demon inside the demon’s brain, and so on.
In other words, equating human memory with a storage-retrieval mechanism does not square with the idea of cognitive systems as “constructing” entities that do not passively process incoming data but (actively) construct information in the first place. Among others, the so-called “change blindness” phenomenon makes this point clear. Daniel Simons and Daniel Levin (1998) report an experiment in which subjects failed to notice the substitution of their conversational partner. The subjects were addressed by collaborators of the experimenters who asked them for directions. After a short while, two men passed between subject and interlocutor carrying a wooden door. The interruption was used to replace the stooge with another one of quite different physical appearance. About half of the participants did not notice the radical change despite the fact that they had been looking at the original interlocutor for about a minute.
Changes in an otherwise static picture, too, will go undetected if they are presented during a blink (O’Regan et al. 2000). Test subjects were asked to view a variety of natural indoor and outdoor scenes while their eye movements were recorded. At the time of an eye blink a large change occurred at a location of either central or marginal interest. While there were differences between central and marginal locations, the authors report that in both cases the subjects failed to see the changes more than 40% of the time, even when they were directly fixating the change locations. Similar results have been obtained in previous experiments, which showed failure to detect changes occurring simultaneously with saccades, flicker, or ‘mudsplashes’ in the visual scene.
If the perception of stimuli (such as events and data that are later to be recalled in memory experiments) were the transfer of information and meaning from the observed entity to our perceptual apparatus, and further through the nervous system to the cognitive apparatus, we would indeed be in need of the mile-wide brain Foerster refers to in the quote at the beginning of this section. Storing pixel data of roughly 20 pictures a second would cause an immediate overrun of storage capacities. Of course, digital representations of images can be compressed, thus reducing the storage need. However, pixel-oriented compression only yields reductions of about one order of magnitude, and vector-oriented representations need to establish a criterion for which features count as meaningful “cognitive determinants” on which compression could be based. But to find the relevant features is impossible; cf. Dennett’s (1984) robot R2D1, which failed to act because it had to use all its cognitive capabilities to distinguish relevant from irrelevant information.
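The storage-overrun argument can be put in rough numbers. The frame rate of about 20 pictures per second comes from the text; the resolution, color depth, and waking time below are purely illustrative assumptions, not physiological data.

```python
# Back-of-envelope arithmetic for the storage argument above.

FRAME_RATE = 20                 # "pictures" per second, from the text
PIXELS = 1920 * 1080            # assumed resolution of one "picture"
BYTES_PER_PIXEL = 3             # 24-bit color, assumed
COMPRESSION = 10                # roughly one order of magnitude, per the text
WAKING_SECONDS = 16 * 60 * 60   # assumed 16 waking hours per day

raw_per_day = FRAME_RATE * PIXELS * BYTES_PER_PIXEL * WAKING_SECONDS
compressed_per_day = raw_per_day / COMPRESSION

print(f"raw:        {raw_per_day / 1e12:.1f} TB per day")
print(f"compressed: {compressed_per_day / 1e12:.1f} TB per day")
```

Even after the one-order-of-magnitude compression the text grants, a literal pixel store would accumulate on the order of a terabyte per day under these modest assumptions, which is the point of Foerster's mile-wide-brain remark.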
More than 200 years ago the philosopher Immanuel Kant (1781) stood before the same problem. He argued that so far “it has been assumed that all our knowledge must conform to objects,” an approach that he regarded as a failure. Instead he proposed a “Copernican Turn,” according to which “objects must conform to our knowledge” (rather than the other way around), thus radically dismissing any form of determination of the cognizing individual by the outside reality (see also Bettoni 1997).
In order to implement the Copernican Turn we refer to what Foerster called the Principle of Undifferentiated Encoding in the nervous system: “The response of a nerve cell does not encode the physical nature of the agents that caused its response. Encoded is only ‘how much’ at this point on my body, but not ‘what’” (Foerster 1973, pp. 214–215). Humberto Maturana and Francisco Varela extended this argument into what they call the organizational closure of the nervous system, which is “a closed network of interacting neurons such that any change in the state of relative activity of a collection of neurons leads to a change in the state of relative activity of other or the same collection of neurons” (Winograd and Flores 1986). Therefore, the cognitive apparatus must construct its reality, and the entities it is populated with, in the first place. Perturbations from the outside may, at best, modulate the dynamical construction process of the cognitive apparatus but cannot determine it. There is no purpose attached to these dynamics, no goals imposed from the outside onto the cognitive apparatus.
In other words, the cognitive apparatus predetermines what to perceive. Its dynamics follows the “constructivist-anticipatory principle” (Riegler 1994): it constructs cognitive structures in the first place and only occasionally seeks to validate them through sensory input. Riegler (2001a) compares this with a relay race in which the runners focus on their running except for the short moments of coordination when they pass the baton on to the next runner. One could describe these moments of coordination as “checkpoints” (Riegler 1994) at which a runner verifies that she is still on track so that the race can go on with the subsequent team member. Oliver Sacks’s (1995) example of a blind man demonstrates that humans rely on such relay-race-like cognitive strategies. Throughout his life as a blind man, the tactile aspect of his world had priority: he recognized things by feeling their surfaces in a particular order. When walking through a familiar place he did not get lost because he relied on a certain sequence of tactile impressions he would encounter. This applies to visual perception as well. For Kevin O’Regan and Alva Noë (2001), “seeing is knowing sensorimotor dependencies,” and the brain is a device for extracting algebraic structures between perception and action.
This view has gained considerable acceptance in the literature. Susan Oyama (1985) and William Clancey (1995) assert that information is not ‘retrieved’ but rather ‘created’ by the system. Dennett (1991) claims that our brains hold only a few salient details and fill in the rest from memory. O’Regan (1992) concludes that “we only see what we attend to,” and according to Gerhard Roth (quoted in Pörksen 2004, p. 121) the “re-enactment of the image, released by only a few sign stimuli, is far quicker than it would be if the eye had to scan the environment atomistically every single time.” However, based on the insights of Foerster, Maturana, and Varela, radical constructivism goes one step further and claims that the cognitive apparatus constructs its reality without “knowing” that its inputs come from the sensory surface, as there is no way to distinguish sensory signals from any other nervous signals. The idea of a sensory signal is only reconstructed a posteriori. In this approach the notion of memory takes on a whole new dimension, which will be discussed in the following section.
A memory is a concept that reconstructs or stabilizes or reproduces other concepts.
Gordon Pask [8], in Cybernetics of cybernetics, 1974, p. 312
From what has been explicated so far it follows that a constructive cognitive entity is a system that creates its own world rather than mapping structures of reality onto its cognitive substratum. It is an inherently active system, one which first invents things and afterwards tries to figure out whether these a priori structures make sense when, for example, interacting with other systems. Therefore, for a constructively working entity it is useless to assume that memory consists of storage and retrieval. As it does not “map” reality onto its cognitive structure, it “stores” neither external data and events nor internal experiences. Rather, the function of memory is to compress sequences of constructed cognitive structures into compounds that can be readily accessed afterwards. So whenever in our stream of experience we encounter key patterns that match the “tags” of memorized compounds (the “relative activity of a collection of neurons” in the sense of Maturana and Varela), the compounds, rather than experiential stimuli from the environment, serve as an input for cognition.
This intertwining of cognition and memory is tight [9]; separating them on the functional [10] level is impossible. One could argue that in a certain sense computer memory without methods of accessing it is no memory either. Anybody with a crashed hard disk can confirm this: the data is still “there” but beyond access. However, the analogy is far from perfect; in human cognition there are no “data” stores. Knowledge about memorized events, dates, etc. resides in the dynamical structure of cognition. As such it resembles, to a certain degree, the mnemonic technique of orators in ancient Rome, who recalled material while virtually walking a familiar path along which individual landmarks triggered the items to be remembered. While you are walking, you as a self-observer reflect on your doing, thereby identifying familiarities. That is, you find the anticipations of your cognitive schemata confirmed by the present experience. Landmarks in this sense are the points where your schemata request confirmation that they are still on track. Consequently, what psychological memory experiments reveal is not the nature of memory but an epiphenomenon that emerges as the cognitive activity of test subjects is coerced into the conditions of an experimental input–output protocol.
To summarize, the basic functioning of a constructivist entity is as follows: (1) its cognitive apparatus creates a structural component; (2) it interacts with other structures, such as its surrounding environment or older structures of its apparatus; (3) it compares the newly created structures with the structures encountered in step 2; and (4) it modifies the structures if needed, before returning to step 1. Anthropomorphically speaking, the cognitive apparatus tries to figure out whether its a priori structures make sense.
What does “making sense” mean? Do the constructed structures correspond to the environment? As Glasersfeld (1983) notes, it is a matter of “fitting” rather than “matching,” i.e., correspondence. To make his point clear, Glasersfeld provides us with a thought experiment. Suppose that, for the first time, you encounter the word “mermaid.” You are instructed that a mermaid is a hybrid creature between woman and fish. Consequently, you construct a representation out of already known elements associated with the concepts “woman” and “fish,” yielding a composite that is a fishtailed biped. As such it does not resemble the creature of the sea as featured in Disney movies. You go on reading tales about mermaids without getting into conflict with your aberrant notion until you see a picture of a mermaid. Only then will you recognize that you have been using a “wrong” concept.
For a constructivistically functioning entity, “making sense” means that adding the structure in question to the ongoing cognitive process will not result in a contradiction with a certain set of sensory data. As Glasersfeld’s mermaid example shows, as long as you can go along with the concept and do not bump into contradictions, it does not matter that you hold a deviant concept of what a mermaid is. The idea of the mermaid is constructed; it does not mirror any (memorized) experience. Since constructions are primary, this must apply to everything in memory.
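The four-step cycle above, together with Glasersfeld's "fitting" criterion, can be caricatured in a few lines of code. Everything here, from the feature sets to the contradiction test, is an invented illustration, not a model drawn from the constructivist literature.

```python
# Toy sketch of the construct-interact-compare-modify cycle: a structure is
# proposed, confronted with another structure, and revised only where a
# contradiction arises. All names and data are illustrative assumptions.

def construct(known_elements):
    """Step 1: create a new structure out of already known elements."""
    return set(known_elements)

def contradictions(structure, encountered):
    """Steps 2-3: interact with an encountered structure and collect the
    features it rules out (the only failures of 'fit' that register)."""
    return structure & encountered["ruled_out"]

def revise(structure, conflicting):
    """Step 4: modify the structure only where contradictions arose."""
    return structure - conflicting

# Glasersfeld's mermaid: a composite built from "woman" and "fish" features.
mermaid = construct({"woman's torso", "fishtail", "two legs"})

# The deviant concept "fits" as long as nothing contradicts it. Seeing a
# picture of a mermaid finally rules out the "two legs" feature:
picture = {"ruled_out": {"two legs"}}
mermaid = revise(mermaid, contradictions(mermaid, picture))
print(sorted(mermaid))
```

Note that the cycle never checks whether the construct "matches" some external mermaid; it only removes features that produced a contradiction, which is exactly the asymmetry between fitting and matching.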
From this perspective, memory is but an aid to cognition. It helps to conserve the mind’s constructions and to make them available in order to speed up cognition, which would otherwise be hampered by the slow working of the perceptual apparatus. The proximity between memory and action was demonstrated by, among others, Linda Henkel et al. (2000). Their findings suggest that repeated exposure of test subjects to certain objects and events through sight, sound, or mere imagination influences their memories of where, how, and whether at all they experienced a certain event.
What we cannot think, that we cannot think:
we cannot therefore say what we cannot think.
Ludwig Wittgenstein, Tractatus logico-philosophicus, 1922, §5.61
Despite the growing conceptual argumentation and experimental evidence in its favor, one might be reluctant to subscribe to the radical constructivist perspective. At first glance, adopting this approach seems tantamount to saying that we can construct without limits. For example, what prevents the reader from constructing the fact of reading this article at this very moment and of flying over the Grand Canyon an instant later? Obviously there are limits to how the cognitive apparatus constructs reality.
In order to resolve the problem let us have a closer look at radical constructivism. As has been discussed in Riegler (2001b) in greater detail, it rests on four claims.
1. The Radical Constructivist Postulate says that reality is brought forth by an organizationally closed cognitive apparatus (see the arguments of Foerster, Maturana, and Varela above). Maturana and Varela (1987) compare the mind with a submarine navigator who uses the instruments rather than sight to navigate the vessel: he only knows about internal meters, levers, and knobs rather than external reefs and currents. This illustration gives rise to the assumption that memory, too, does not “know” about external data and events. It may or may not reflect entities in the real world; therefore, its purpose is not to represent them. Furthermore, memory engrams are no “frozen pieces of experience” but elements in the continuously self-transforming cognitive activity of the mind, which is free of externally determined goals.
2. The Epistemological Corollary claims that an absolute mind-independent reality can neither be rejected nor confirmed since, following the skeptical route to radical constructivism (Glasersfeld 1991), we do not have any privileged access to reality independent of our senses. This renders any correspondence theory of representation impossible (Peschl & Riegler 1999). Consequently, memory cannot be considered a sort of canvas on which perception paints; there are no neuron clusters whose activations correlate with external events in a stable referential manner.
3. The Methodological Corollary claims the circularity of knowledge, as there is no outside point of reference (even though we can always think as if it were possible to relate truth to configurations in a projected mind-independent reality). Rather, knowledge is manifest in a dynamical network of constituent components that support each other. [11] Only a snapshot of this network can be referred to as “memory.” However, this must remain an incomplete picture; memory neither resembles static computer memory, nor would remembering isolated facts have been an evolutionary advantage in an ever-changing environment [12]. As pointed out before, memory constitutes itself in the ongoing dynamics of cognitive processes; it is not only topologically but also dynamically distributed.
4. Due to the inherent properties of network-like structures, the Limitation of Construction Postulate rejects the idea of unconstrained “Grand Canyon” constructions, thus saving radical constructivism from an unrestricted “anything goes.” As the network is subject to growth over time, it develops hierarchical interdependencies which canalize future expansion. Changing one component in this network necessarily changes the context of other elements, so the components impose constraints on each other. Questions like “Does this table in front of me exist?” or “Do you believe you can walk through this closed door?” are considered isolated linguistic constructs. As Siegfried Schmidt (quoted in Pörksen 2004, p. 134) put it, “For if I want to know whether this table exists, there already has to be a table in my experiential reality I can deal with. The question of whether this table exists or not is an assertion that neither adds to, nor subtracts from, existence.” That we can isolate the concept of a table from its defining (dynamical-operational) context, i.e., abstract from its embeddedness (Riegler 2002), is a remarkable feat of language only; it does not make sense on the level of experiences.
The construction network of the mind is necessarily non-arbitrary, as it follows the canalizations that result from the mutual interdependencies among constructive components. Once a certain direction is taken, relating components to each other in a particular manner, the cognitive apparatus uses previous constructions as building blocks for further constructions. Consequently, the idea of a door and the experience of either passing through it or bumping into it are mental constructs that depend on other constructs, some of which we might not even be aware of. Only on a meta-level can we single out constituents of the composite constructions and fall prey to the thought that we could deal with each component separately, or that we could change the features of isolated entities as if those features did not depend on other elements.
Furthermore, the hierarchical nature of the network also introduces various degrees of changeability of constructions, depending on the number of other components that relate to the component in question. Aspects located in more recently added cognitive layers (e.g., scientific problems) can be more easily re-arranged (and therefore solved) than, say, psychological problems, for which you consult a therapist in order to “reframe” the respective part of the network. Still older elements, which we constructed in our early sensorimotor days, might turn out to be entirely inaccessible to change.
How does sensorimotor activity account for the construction of objects, which literally “stand against” [13] us, such as doors and tables? Here we meet Heinz von Foerster again, who pointed out that “what is referred to as ‘objects’... in an observer-excluded (linear, open) epistemology, appears in an observer-included (circular, closed) epistemology as ‘token for stable behavior’” (Foerster 1976, p. 261), or “attractors” in the terminology of complexity research. He argued that these attractors or eigen-behaviors result from the recursion of accounting for the changes in an organism’s sensations by its actions, which in turn are described in terms of its sensations. Therefore, the things which appear to us as objects are equilibria that determine themselves through circular processes. They reside “exclusively in the subject’s own experience of his or her sensorimotor coordination” (Foerster 1976, p. 266).
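Foerster's "tokens for stable behavior" have a simple formal analogue: the fixed point of a recursively applied operation. In the toy example below the operation is the cosine function, chosen only because its iteration visibly converges; the point is that quite different starting "sensations" stabilize on the same "object."

```python
import math

# Toy illustration of an eigen-behavior: repeatedly applying the same
# operation to its own result converges on a stable value, an "attractor"
# that depends on the recursion itself, not on the starting point.

def eigenbehavior(op, x0, steps=100):
    """Recursively apply op to its own output; return where it settles."""
    x = x0
    for _ in range(steps):
        x = op(x)
    return x

# Whatever the starting value, the recursion stabilizes on the same
# equilibrium, the solution of cos(x) = x:
a = eigenbehavior(math.cos, 0.0)
b = eigenbehavior(math.cos, 1.5)
print(round(a, 6), round(b, 6))
```

The stable value is a property of the circular process, not of either starting point, which is the sense in which Foerster's "objects" determine themselves through recursion rather than being delivered from outside.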
Evidently, we arrive at the tension between experiential reality (or operational competence) and the ability to reason and (philosophically) argue about it in language. This is also the starting point for an experiment that corroborates the fourth postulate. It is an experiment with a “magical touch,” and as such it would have delighted Heinz von Foerster. [14] Gabrielle Simcock and Harlene Hayne (2002) were dissatisfied with the lack of definitive answers to childhood amnesia, i.e., the phenomenon that we forget our earliest childhood experiences up to the age of three years. To shed light on the problem, they designed the Magical Shrinking Machine, a device that apparently turns big toys into small ones — an event that the authors considered long-lasting in the memories of their two- and three-year-old test subjects. Six months or one year after an initial demonstration of the machine they returned to the children in order to assess both their verbal (based on open-ended and direct questions) and non-verbal (i.e., photographic recognition and behavioral re-enactment) accounts of that event. While the children scored high on the non-verbal part of the experiment, they could describe the event only with the limited language they had known at the time, even though their vocabulary had grown considerably since then. Their verbal descriptions of the event were “frozen in time, reflecting their verbal skill at the time of encoding, rather than at the time of the test” (Simcock & Hayne 2002, p. 229). Cognitive development seems to resemble a ratchet (Riegler 2001a): once the individual starts to reason in language, it cannot reach back to unconscious procedural memories. Although the result primarily addresses the origin of autobiographical memory, one has to ponder whether it applies to implicit procedural memory as well, which deals with actions like walking through doors and perceiving tables.
If humans cannot translate their preverbal memories into language, how can basic sensorimotor constructs made in that early period be reasoned about and claimed to be part of a mind-independent reality?
Memory: The irreducible uncertainty of an observer with incomplete knowledge of the present internal state of a non-trivial machine (say, a living organism), which the observer interprets as a property of the machine.
Heinz von Foerster, in Cybernetics of cybernetics, 1974
The framework of radical constructivism suggests refraining from separating cognition and memory. What we usually refer to as “memory” is but the expression of a static snapshot of otherwise dynamical cognitive processes. Recalling these snapshots in isolation, as psychology does, leads to the riddle of memory failure and false memories. That we can use memory for feats like remembering important dates in history compares to riding a horse: horses did not evolve in order to be ridden by humans. Rather, the transportation capability of horses is a trivialization of the biological complexity of these animals. According to Foerster (1972), “trivialization” is the method by which we reduce the degrees of freedom of a complex entity so that it behaves like a trivial machine, i.e., an automaton that maps input directly onto output without recourse to any internal state. Similarly, humans discovered how to tame wild horses and turn them to practical use: the rider gives the horse commands (input), and the horse carries them out (output). However, since horses remain complex biological systems, we must not be surprised when a horse occasionally throws its rider. Likewise, we cannot expect trivialized memory to function like an input–output storage device either. Confabulations, for example, arise when a “landmark request” in a particular cognitive schema is unspecific enough to activate a deviating chain of schemata. A child who remembers only the fact that she received “something” from her parents may later interpret this “something” as a teddy bear even though it was originally a candy.
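Foerster’s distinction between trivial and non-trivial machines can be made concrete with a small sketch. The classes and behavior below are hypothetical illustrations (none of the names come from the source): the trivial machine always maps the same input onto the same output, while the non-trivial machine’s response depends on a hidden internal state that each input changes, so identical inputs need not yield identical outputs.

```python
# A trivial machine: input maps directly onto output, no internal state.
def trivial_machine(x):
    # Same input always produces the same output -- fully predictable.
    return x.upper()

# A non-trivial machine: the output depends on a hidden internal state,
# and every input also changes that state.
class NonTrivialMachine:
    def __init__(self):
        self.state = 0  # hidden internal state, invisible to the observer

    def step(self, x):
        out = x.upper() if self.state % 2 == 0 else x.lower()
        self.state += 1  # the input drives a state transition
        return out

m = NonTrivialMachine()
print(trivial_machine("Hi"), trivial_machine("Hi"))  # identical outputs
print(m.step("Hi"), m.step("Hi"))  # same input, different outputs
```

An observer who cannot inspect `state` faces exactly the “irreducible uncertainty” of the epigraph: the machine’s behavior looks unreliable only because it has been mistaken for a trivial input–output device.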
Memory, as the dynamical network of constructive components, is not geared toward reproducing “true” facts. Rather, the goal of cognition is to produce structure that maintains coherence with the rest of the network. Therefore, misleading post-event suggestions cause distortions of the network that the cognitive apparatus has to compensate for. In this regard it is not surprising that free imagination of the alleged event significantly increases the ratio of false recalls (e.g., Garry et al. 1996). As pointed out before, the unconstrained constructive activity of the mind is its ‘normal mode.’ Imagining is the cognitive process itself, undisturbed by sensory stimuli. Therefore, we can expect that free imagination distorts the cognitive network much more strongly than sensory stimuli (cf. the warning in Loftus (2003) about the suggestive techniques of psychotherapists), as the latter have a merely perturbing character. In this sense, false memories (re)considered in the light of radical constructivism lose their adjective “false.”
Atkinson, R. C. & Shiffrin, R. M. (1968) Human memory: A proposed system and its control processes. In: Spence, K. W. (ed.) The Psychology of Learning and Motivation: Advances in Research and Theory. New York: Academic Press, pp. 89–195.
Baddeley, A. (2001) Memory. In: Wilson, R. A. & Keil, F. C. (eds.) (2001) The MIT Encyclopedia of the Cognitive Sciences. MIT Press.
Bartlett, F. C. (1932) Remembering: A Study in Experimental and Social Psychology. Cambridge University Press.
Berners-Lee, T., Hendler, J. & Lassila, O. (2000) The Semantic Web. Scientific American May: 34–43.
Bettoni, M. C. (1997) Constructivist foundations of modeling: A Kantian perspective. International Journal of Intelligent Systems 12: 577–595.
Bruner, J. (1973) Going Beyond the Information Given. New York: Norton.
Clancey, W. J. (1991) Review of Rosenfield’s “The Invention of Memory.” Artificial Intelligence 50(2): 241–284.
Craik, F. I. M., & Lockhart, R. S. (1972). Levels of processing: A framework for memory research. Journal of Verbal Learning and Verbal Behavior 11: 671–684.
Dali, S. (1942) The Secret Life of Salvador Dali. New York: Dial Press.
Dawkins, R. (1976) The Selfish Gene. Oxford University Press.
Deese, J. (1959) On the prediction of occurrence of particular verbal intrusions in immediate recall. Journal of Experimental Psychology 58: 17–22.
Dennett, D. C. (1984) Cognitive Wheels: The Frame Problem of AI. In: Hookway, C. (ed.) Minds, Machines, and Evolution: Philosophical Studies, London: Cambridge University Press, pp. 129–151.
Dennett, D. C. (1987) The Intentional Stance. Cambridge, MA: MIT Press.
Dennett, D. C. (1991) Consciousness Explained. London: Little, Brown & Co.
Dodson, C. S., Koutstaal, W. & Schacter, D. L. (2000) Escape from illusion: Reducing false memories. Trends in Cognitive Science 4: 391–397.
Ebbinghaus, H. (1885) Über das Gedächtnis. Untersuchungen zur experimentellen Psychologie. Leipzig: Duncker & Humblot. English translation: Ebbinghaus, H. (1913) Memory. A Contribution to Experimental Psychology. New York: Teachers College, Columbia University.
Foerster, H. von (1948) Das Gedächtnis. Eine quantenphysikalische Untersuchung. Wien: Franz Deuticke.
Foerster, H. von (1949) Quantum Mechanical Theory of Memory. In: Foerster, H. von (ed.) Cybernetics: Circular Causal and Feedback Mechanisms in Biological and Social Systems. New York: Josiah Macy Jr. Foundation, pp. 112–145. Reprinted in: Pias, C. (ed.) Cybernetics - Kybernetik. Zürich: Diaphanes, pp. 98–121.
Foerster, H. von (1965) Memory Without Record. In D. P. Kimble (ed.) Learning, Remembering and Forgetting, Vol. 1: The Anatomy of Memory, Science and Behavior Books; Palo Alto, California, pp. 388–433. Reprinted in: Foerster, H. von (1981) Observing Systems. Seaside, CA: Intersystems Publications, pp. 91–138.
Foerster, H. von (1967) Time and Memory. In: Fischer, R. (ed.) Interdisciplinary Perspectives of Time. New York: New York Academy of Sciences, pp. 866–873. Reprinted in: Foerster, H. von (1981) Observing Systems. Seaside, CA: Intersystems Publications, pp. 139–148.
Foerster, H. von, A. Inselberg & P. Weston (1968) Memory and Inductive Inference. In: Oestreicher, H. & Moore, D. (eds.) Cybernetic Problems in Bionics. Bionics Symposium 1966. New York: Gordon and Breach Science Publishers, pp. 31–68.
Foerster, H. von (1969) What is Memory that It May Have Hindsight and Foresight as Well? In S. Bogoch (ed.) The Future of the Brain Sciences: Proc. of the 3rd International Conference. New York: Plenum Press, pp. 19–64. Reprinted in: Foerster, H. von (2003) Understanding Understanding. New York: Springer, pp. 101–131. [Page numbers in the text refer to the reprint].
Foerster, H. von (1970a) Molecular Ethology, an Immodest Proposal for Semantic Clarification. Originally published in: Ungar, G. (ed.) Molecular Mechanisms in Memory and Learning, Plenum Press, New York, pp. 213–248. Reprinted in: Foerster, H. von (2003) Understanding Understanding. New York: Springer, pp. 133–168. [Page numbers in the text refer to the reprint].
Foerster, H. von (1970b) Thoughts and Notes on Cognition. In: P. Garvin (ed.) Cognition: A Multiple View. New York: Spartan Books, pp. 25–48. Reprinted in: Foerster, H. von (2003) Understanding Understanding. New York: Springer, pp. 169–190. [Page numbers in the text refer to the reprint].
Foerster, H. von (1972) Perception of the Future and the Future of Perception. Instructional Science 1 (1): 31–43. Reprinted in: Foerster, H. von (2003) Understanding Understanding. New York: Springer, pp. 199–210. [Page numbers in the text refer to the reprint].
Foerster, H. von (1973) On constructing a reality. In: Preiser, F. E. (ed.) Environmental Design Research, Vol. 2. Stroudsburg: Dowden, Hutchinson & Ross, pp. 35–46. Reprinted in: Foerster, H. von (2003) Understanding Understanding. New York: Springer, pp. 211–228. [Page numbers in the text refer to the reprint].
Foerster, H. von (ed.) (1974) Cybernetics of cybernetics, or the control of control and the communication of communication. University of Illinois. Republished in 1995 by Stephen A. Carlton with Future Systems in Minneapolis.
Foerster, H. von (1976) Objects: Tokens for (Eigen-)Behaviors, ASC Cybernetics Forum 8: 91–96. Reprinted in: Foerster, H. von (2003) Understanding Understanding. New York: Springer, pp. 261–271. [Page numbers in the text refer to the reprint].
Foerster, H. von & Schröder, P. (1993) Introduction to Natural Magic. Systems Research 10: 65–79. Reprinted in: Foerster, H. von (2003) Understanding Understanding. New York: Springer, pp. 325–338. [Page numbers in the text refer to the reprint].
Foerster, H. von (2003) Understanding Understanding. New York: Springer.
Garry, M. et al. (1996) Imagination Inflation: Imagining a Childhood Event Inflates Confidence that it Occurred. Psychonomic Bulletin & Review 3(2): 208–214.
Glanville, R. (2003) Heinz von Foerster. Systems Research and Behavioral Science 20: 85–89.
Glasersfeld, E. von (1983) Learning as a constructive activity. In: Bergeron, J. C. & Herscovics, N. (eds.) Proceedings of the Fifth Annual Meeting of the North American Chapter of the International Group for the Psychology of Mathematics Education. Montreal: University of Montreal, pp. 41–69.
Glasersfeld, E. von (1991) Knowing without Metaphysics: Aspects of the Radical Constructivist Position. In: Steier, F. (ed.) Research and Reflexivity (Inquiries into Social Construction). London: Sage Publications, pp. 12–29.
Henkel, L. A., Franklin, N. & Johnson, M. K. (2000) Cross-modal confusions between perceived and imagined events. Journal of Experimental Psychology: Learning, Memory, & Cognition 26: 321–335.
James, W. (1890). The Principles of Psychology. New York: Holt, Rinehart and Winston.
Kant, I. (1781) Kritik der reinen Vernunft. Vorrede zur zweiten Ausgabe. Leipzig: Reclam jun. 1781, pp. 21–25. English translation: Critique of Pure Reason.
Loftus, E. (2003) Our changeable memories: Legal and practical implications. Nature Review Neuroscience 4(3): 231–234.
Loftus, E. & Pickrell, J. (1995) The formation of false memories. Psychiatric Annals 25: 720–725.
Maturana, H. R. (1982) Erkennen: Die Organisation und Verkörperung von Wirklichkeit. Braunschweig: Vieweg.
Maturana, H. R. & Varela, F. J. (1987) The Tree of Knowledge. Boston MA: Shambhala.
Müller, A. (2000) Eine kurze Geschichte des BCL. Österreichische Zeitschrift für Geschichtswissenschaften 11: 9–30.
Neisser, U. (1967) Cognitive Psychology. New York: Meredith.
O’Regan, J. K. (1992) Solving the “real” mysteries of visual perception: The world as an outside memory. Canadian Journal of Psychology 46: 461–488.
O’Regan, J. K., Deubel, H., Clark J. J. & Rensink, R. A. (2000) Picture changes during blinks: looking without seeing and seeing without looking. Visual Cognition 7: 191–212.
O’Regan, J. K. & Noë, A. (2001) What it is like to see: A sensorimotor theory of perceptual experience. Synthese 129: 79–103.
Oyama, S. (1985) The Ontogeny of Information: Developmental Systems and Evolution. Cambridge University Press: Cambridge MA. Republished in 2000.
Pask, G. (1975) Conversation, Cognition, and Learning. New York: Elsevier.
Peschl, M. & Riegler, A. (1999) Does Representation Need Reality? In: Riegler, A., Peschl, M. & Stein, A. von (eds.) Understanding Representation in the Cognitive Sciences. New York: Kluwer Academic / Plenum Publishers, pp. 9–17.
Pörksen, B. (2004) The Certainty of Uncertainty. Exeter: Imprint. German original appeared in 2001.
Riegler, A. (1994) Constructivist artificial life: The constructivist-anticipatory principle and functional coupling. In: Hopf, J. (ed.) Workshop on Genetic Algorithms within the Framework of Evolutionary Computation. Max-Planck-Institute Report No. MPI-I-94–241, pp. 73–83.
Riegler, A. (2001a) The cognitive ratchet. The ratchet effect as a fundamental principle in evolution and cognition. Cybernetics and Systems 32: 411–427.
Riegler, A. (2001b) Towards a radical constructivist understanding of science. Foundations of Science, special issue on “The Impact of Radical Constructivism on Science” 6: 1–30.
Riegler, A. (2002) When is a cognitive system embodied? Cognitive Systems Research, special issue on “Situated and Embodied Cognition,” edited by T. Ziemke, 3: 339–348.
Riegler, A. (2003) Memory ain’t no fridge: A constructivist interpretation of constructive memory. In: Kokinov, B. & Hirst, W. (eds.) Constructive Memory. Sofia: NBU Series in Cognitive Science, pp.277–289.
Roediger, H. L. & McDermott, K. B. (1995) Creating false memories: Remembering words not presented in lists. Journal of Experimental Psychology: Learning, Memory, & Cognition 21(4): 803–814.
Sacks, O. (1995) An Anthropologist on Mars. New York: Alfred A. Knopf.
Schrödinger, E. (1944) What Is Life? Cambridge: Cambridge University Press.
Scott, B. (2001) Gordon Pask’s conversation theory: A domain independent constructivist model of human knowing. Foundations of Science, special issue on “The Impact of Radical Constructivism on Science” 6: 343–360.
Seamon, J. G., Luo, C. R., & Gallo, D. A. (1998) Creating false memories of words with or without list item recognition: Evidence for nonconscious processes. Psychological Science 9: 20–26.
Segal, L. (2001) The Dream of Reality. Second edition. New York: Springer. Originally published in 1986 with Norton: New York.
Simcock, G. & Hayne, H. (2002) Breaking the barrier? Children fail to translate their preverbal memories into language. Psychological Science 13(3): 225–231.
Simons, D. J., & Levin, D. T. (1998) Failure to detect changes to people in a real-world interaction. Psychonomic Bulletin and Review 5: 644–649.
Squire, L. R. (1987) Memory and Brain. New York: Oxford University Press.
Tulving, E. (1972) Episodic and semantic memory. In: Tulving, E. and Donaldson, W. (eds.) Organization of Memory. New York: Academic Press, pp. 381–403.
Winograd, T. & Flores, F. (1986) Understanding Computers and Cognition. Ablex: Norwood.
Wittgenstein, L. (1922) Tractatus Logico-philosophicus. London: Routledge.
Wittgenstein, L. (1953) Philosophical Investigations. Oxford: Basil Blackwell.