
Accepted Manuscript

Title: The body of knowledge: On the role of the living body in grounding embodied cognition

Author: Tom Ziemke

PII: S0303-2647(16)30168-X

DOI: http://dx.doi.org/10.1016/j.biosystems.2016.08.005

Reference: BIO 3690

To appear in:

BioSystems

Received date: 6-8-2016

Accepted date: 9-8-2016

Please cite this article as: Ziemke, Tom, The body of knowledge: On the role of the living body in grounding embodied cognition. BioSystems http://dx.doi.org/10.1016/j.biosystems.2016.08.005


The body of knowledge: On the role of the living body in grounding embodied cognition

Tom Ziemke

Interaction Lab, School of Informatics, University of Skövde, 54128 Skövde, Sweden. tom.ziemke@his.se

Cognition & Interaction Lab, Human-Centered Systems Division, Department of Computer & Information Science, Linköping University, 58183 Linköping, Sweden. tom.ziemke@liu.se

Abstract

Embodied cognition is a hot topic in both cognitive science and AI, despite the fact that there still is relatively little consensus regarding what exactly constitutes 'embodiment'. While most embodied AI research views the body as the physical/sensorimotor interface that allows computational cognitive processes to be grounded in sensorimotor interactions with the environment, more biologically-based notions of embodied cognition emphasize the fundamental role that the living body - and more specifically its homeostatic/allostatic self-regulation - plays in grounding both sensorimotor interactions and embodied cognitive processes. Adopting the latter position, a multi-tiered affectively embodied view of cognition in living systems, it is further argued that synthetic biology research can make significant contributions to modeling/understanding the nature of organisms as layered networks of bodily self-regulation mechanisms.

Keywords: Allostasis; Grounding; Homeostasis; Embodied AI; Embodied cognition; Emotion; Intentionality; Predictive regulation; Representation; Robots; Synthetic biology.

1. Introduction

At some point in the long and winding write-up of this paper, its title was "What makes embodied AI embodied?". That title eventually disappeared again, but the question is still highly relevant to this paper - and the answer is not as straightforward as one might think. Robots are, no doubt, considered 'embodied' by most AI researchers, and are in fact the obvious AI approach to modeling natural embodied cognition or synthesizing artificial equivalents thereof (e.g. Morse et al., 2011). Much embodied AI research, however, also makes use of simulated robots or other types of non-physical agents, e.g. so-called virtual agents or different types of more abstract artificial-life agents. Hence, one might ask (cf. Ziemke, 2004) whether embodied AI really is about embodied (i.e. physical, robotic, etc.) models of cognition, or rather about models - any type of model: robotic ones obviously, but also purely computational ones - of embodied cognition (whatever that is), or maybe both? If you find that question somewhat confusing, you are not alone. As discussed in more detail in Section 2, despite more than 25 years of research on embodied cognition and AI, and by now a number of books on the topic (e.g. Varela et al., 1991; Pfeifer & Scheier, 1999; Gallagher, 2005; Ziemke et al., 2006; Johnson, 2007; Thompson, 2007; Shapiro, 2010; Lindblom, 2015), there still is a perplexing diversity of notions of embodied cognition, as well as claims concerning its nature and relevance.

Given that this paper is part of a journal special issue on the relation between embodied AI and synthetic biology, it should come as no surprise that it is argued here that synthetic biology research might be able to make significant contributions to embodied AI, and thereby also might help to clarify the role that biological embodiment plays in natural cognition. To what degree the underlying biological mechanisms really do play a role in cognitive processes and capacities is another open question in the cognitive sciences, and in fact not everybody would agree that they actually play any role at all, other than that of a particular physical implementation that could just as well be replaced by another, non-biological - e.g. computational and/or robotic - implementation. Different arguments supporting the view that the underlying biology in general, and bodily self-regulation in particular, actually does play a crucial role in embodied cognition are discussed in more detail in Section 3.

Section 4 then, finally, presents some discussion and conclusions. It will be argued that embodied cognition is not only grounded in sensorimotor interaction with the environment - a claim that most proponents of embodied cognition, and even some of its opponents, would agree to - but that at least natural cognition is furthermore also deeply rooted in the underlying biological mechanisms, and more specifically in layered/nested networks of homeostatic/allostatic bodily self-regulation mechanisms. Hence, the potential contribution of synthetic biology to embodied cognition and AI, it will be argued, lies first and foremost in modeling/understanding/synthesizing the nature of organisms as such layered networks. This would be an important complement to current work in embodied AI and cognitive architectures/robotics, much of which is predominantly concerned with layered architectures for dealing with the complexities of perceiving and acting in the external environment.

2. What's that Thing Called Embodiment?

The embodied approach in cognitive science and AI has received increasing attention in recent years. In fact, "Embodied Cognition is sweeping the planet", at least according to Fred Adams' back-cover endorsement of the paperback edition of Shapiro's (2010) book on the topic. Research on embodied cognition has received significant attention in the cognitive sciences for at least 25 years now, if you count from the appearance of Varela, Thompson and Rosch's book "The Embodied Mind" in 1991. It should be noted, though, that despite this, at least at this point in time, there actually is no such thing as the embodied mind thesis or paradigm. This is reflected, for example, by recent paper titles such as "Embodied cognition is not what you think it is" (Wilson & Golonka, 2013) and recent debates about the alleged "poverty of embodied cognition" (Goldinger et al., 2016; Killeen, 2016) that reveal deep misunderstandings and wildly different (mis-)conceptions of even the most basic tenets of embodied cognition research.

From the embodied AI researcher's perspective, on the other hand, what is and what is not embodied might seem relatively straightforward: the computer programs of traditional AI research are widely considered 'disembodied', whereas robots obviously are embodied - at least in some sense (cf. Ziemke, 2001b; Ziemke & Thill, 2014). Much early embodied AI research was to some degree driven by criticisms of traditional AI formulated by philosophers such as Dreyfus (1979), Searle (1980) and Harnad (1990). A key point in these criticisms was the lack of interaction between the internal representations - at the time typically symbolic ones - of AI programs and the external world they were supposed to represent. Dreyfus (1979), for example, inspired by Heidegger's notion of being-in-the-world, argued that any computer program "is not always-already-in-a-situation. Even if it represents all human knowledge in its stereotypes, including all possible types of human situations, it represents them from the outside ... It isn't situated in any one of them, and it may be impossible to program it to behave as if it were". Searle's (1980) criticism of computational AI systems, based on his famous Chinese Room Argument, was that "the operation of such a machine is defined solely in terms of computational processes over formally defined elements", and that such "formal properties are not by themselves constitutive of intentionality" - which is the characteristic of human cognition that allows it to be about the world. Harnad's (1990) argument was based on Searle's, but he referred to the problem of intentionality as a lack of 'intrinsic meaning' in purely computational systems, which he argued could be resolved by what he termed symbol grounding, i.e. the grounding of internal symbolic representations in sensorimotor interactions with the environment.
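To make the idea of symbol grounding a bit more concrete, the following Python sketch shows the gist of Harnad's proposal. It is purely illustrative - the class, the symbols and the feature vectors are all hypothetical - but it captures the basic move: a symbol token is linked to prototypes of sensorimotor feature vectors acquired through interaction, rather than being defined solely in terms of other symbols:

    # Minimal sketch of Harnad-style symbol grounding (illustrative only;
    # symbols and feature vectors are hypothetical). A symbol becomes
    # 'grounded' by being linked to prototypes of sensorimotor feature
    # vectors acquired through interaction with the environment, rather
    # than being defined solely in terms of other symbols.

    from math import dist   # Euclidean distance (Python 3.8+)

    class GroundedSymbols:
        def __init__(self):
            self.prototypes = {}   # symbol -> running-average feature vector
            self.counts = {}

        def observe(self, symbol, features):
            """Update a symbol's prototype from one sensorimotor episode."""
            n = self.counts.get(symbol, 0)
            old = self.prototypes.get(symbol, [0.0] * len(features))
            self.prototypes[symbol] = [(o * n + f) / (n + 1)
                                       for o, f in zip(old, features)]
            self.counts[symbol] = n + 1

        def interpret(self, features):
            """Map raw sensor features to the nearest grounded symbol."""
            return min(self.prototypes,
                       key=lambda s: dist(self.prototypes[s], features))

    g = GroundedSymbols()
    g.observe("red", [0.9, 0.1, 0.1])      # hypothetical camera features
    g.observe("green", [0.1, 0.8, 0.2])
    print(g.interpret([0.85, 0.15, 0.1]))  # -> 'red'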

Embodied approaches to AI - using robotic or simulated 'autonomous agents' - at least at a first glance, allow computer programs and the representations they are using, if any, to be grounded in interactions with the physical environment through the robot/agent platform's sensorimotor capacities. Brooks, for example, one of the pioneers of embodied AI, formulated what he called "the two cornerstones of the new approach to Artificial Intelligence, situatedness and embodiment" (Brooks, 1991). Embodiment from this perspective simply means that "robots have bodies and experience the world directly - their actions are part of a dynamic with the world and have immediate feedback on their own sensations" (Brooks, 1991). According to Brooks, such systems are physically grounded, and hence internally "everything is grounded in primitive sensor motor patterns of activation" (Brooks, 1993). Situatedness, accordingly, means that "robots are situated in the world - they do not deal with abstract descriptions, but with the here and now of the world directly influencing the behavior of the system" (Brooks, 1991).
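The flavor of this physical grounding can be conveyed in a few lines of Python. The sketch below is not Brooks' actual architecture, merely a hedged caricature of its layering idea: behaviors are direct sensor-to-motor couplings, and a higher layer can subsume (suppress) the output of the layers below it; the sensor and motor names are invented:

    # Hedged sketch of a Brooks-style layered (subsumption) controller,
    # not Brooks' actual code: each behavior couples sensors directly to
    # motors, and a higher layer can subsume the layers below it.

    def wander(sensors):
        # Layer 0: default behavior - drift forward.
        return {"left": 0.5, "right": 0.6}

    def avoid(sensors):
        # Layer 1: if an obstacle is close, turn away; otherwise defer.
        if sensors["range"] < 0.3:
            return {"left": -0.5, "right": 0.5}   # turn in place
        return None   # no output: the lower layer's output passes through

    LAYERS = [avoid, wander]   # highest-priority layer first

    def control_step(sensors):
        for layer in LAYERS:
            command = layer(sensors)
            if command is not None:   # higher layer subsumes lower ones
                return command
        return {"left": 0.0, "right": 0.0}

    print(control_step({"range": 0.2}))   # obstacle near: avoid wins
    print(control_step({"range": 2.0}))   # clear: wander's output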

Hence, from the embodied AI perspective, things might seem relatively uncomplicated: robots are embodied and situated in roughly the same sense that humans and other animals are, and thereby they at least potentially can overcome traditional AI's problems with intentionality or intrinsic meaning. The problem of computer programs dealing with ungrounded representations is solved through physical grounding and either not having any representations at all (à la Brooks) or acquiring internal representations through symbol/representation grounding (à la Harnad), i.e. developing such representations in the course of interaction with the external world (e.g. learning a map of the environment). It should be noted, though, that this does not necessarily resolve the philosophical problems discussed above. Searle, for example, already back in 1980, presented - and rejected - what he called the 'robot reply' to his own Chinese Room Argument. This entailed pretty much exactly what is now called embodied AI, namely computer programs running inside robots that interact with their environment through sensors and actuators. In the terms of Searle's argument, to the person inside the Chinese Room, it does not make any difference whether or not inputs to and outputs from the Chinese Room are connected to the sensors and motors of a robot - the person inside the room still lacks the intentionality that characterizes human cognition.

At this point it should be noted that for the purposes of this paper it does not actually matter at all whether or not the reader is familiar with the details of Searle's Chinese Room Argument, let alone convinced of its validity. The argument has been discussed for more than 35 years now (e.g. Harnad, 1989, 1990; Ziemke, 1999; Zlatev, 2001; Preston & Bishop, 2002) without reaching much consensus. What is more interesting here, though, is that there are quite a few embodied AI researchers who - like Searle - take the Chinese Room Argument to be a valid argument against traditional AI, but at the same time - unlike Searle - consider the physical and sensorimotor embodiment provided by current robots to be sufficient to overcome the problem (e.g. Harnad, 1989, 1990; Brooks, 1991, 1993; Zlatev, 2001; cf. Ziemke, 1999). In Harnad's (1989) terms, this type of embodied AI has gone from a computational functionalism to a robotic functionalism. Zlatev (2001), for example, explicitly formulated the functionalist position that there is "no good reason to assume that intentionality is an exclusively biological property (pace e.g. Searle)", and "thus a robot with bodily structures, interaction patterns and development similar to those of human beings ... could possibly recapitulate [human] ontogenesis, leading to the emergence of intentionality, consciousness and meaning". Others, including Searle naturally, do indeed believe that there are good reasons to assume that human-like - or, more generally, organism-like - intentionality is in fact a biological property, and that it does in fact require a biological body (e.g. Varela et al., 1991; Varela, 1997; Ziemke, 2001a, 2001b, 2007, 2008; Sharkey & Ziemke, 2001; Zlatev, 2002; Bickhard, 2009; Froese & Ziemke, 2009; Vernon et al., 2015). The latter perspective is elaborated in more detail in Section 3.

But, before we get there, let us have a quick look at Chemero's (2009) characterization of the current embodied cognition research landscape, which is illustrated in Figure 1. Chemero points out that there currently are at least two very different positions/traditions that are both referred to as 'embodied cognitive science'. One of these, which Chemero refers to as radical embodied cognitive science, is grounded in the anti-representationalist and anti-computationalist traditions of eliminativism, American naturalism, and Gibsonian ecological psychology. The other, more mainstream version of embodied cognitive science, on the other hand, in line with what was referred to as robotic functionalism above, is derived from traditional representationalist and computationalist theoretical frameworks, and therefore also still is more or less compatible with these - as illustrated maybe most prominently by the notion of symbol/representation grounding, as opposed to the more radical position of anti-representationalism. As Chemero rightly points out, although - or maybe because - the mainstream version of embodied cognitive science can be considered a "watered-down" version of its more radical counterpart, it currently receives significantly more attention in the cognitive sciences.

The position of radical embodied cognition, according to Chemero (2009), can be summarized in two positive claims and one negative one:

1. Representational and computational views of embodied cognition are wrong.

2. Embodied cognition should be explained using a particular set of tools T, including dynamical systems theory.

3. The explanatory tools in set T do not posit mental representations.

To summarize the discussion so far, it should now be clearer why exactly it is still surprisingly difficult to pinpoint what embodied cognition is, or what kind of embodiment an artificial cognitive system might require. There are different positions along at least a couple of dimensions of embodiment: physicality, the view of representation, and the role of the underlying biology. Embodied AI researchers emphasize the importance of physical grounding, but in their research practice they commonly make use of software simulations, and the computer programs controlling their robots - physical or simulated - are for the most part still just as computational as the computer programs of traditional AI. Radical embodied cognitive science, at least according to Chemero, is strictly anti-representationalist, whereas mainstream embodied cognitive science more or less still embraces the traditional computationalist/representationalist framework, but emphasizes the need for representations to be grounded, i.e. a robotic functionalism instead of the traditional computational functionalism. Naturally, the role of the biological mechanisms underlying (embodied) cognition is also fundamentally different on the left and the right side of Chemero's diagram (cf. Figure 1). While on the right/mainstream side, the biological embodiment of natural cognition would be considered as just one possible 'implementation', which could just as well be replaced by alternative, e.g. computational and/or robotic, implementations, the left/radical side is at least more open to the idea of the living body actually having some fundamental role in constituting embodied cognition. Which leads us to the next section.

3. Does Life Matter to Embodied Cognition?

As pointed out in the previous section, the nature, role, and conception of 'the body' are actually still far from well-defined in embodied cognitive science - and embodied AI in particular. As discussed in more detail elsewhere (Ziemke, 2000, 2004, 2007), on the one hand, much embodied AI research, in particular the widespread emphasis on the importance of physical embodiment (e.g. Brooks, 1991, 1993; Steels, 1994; Pfeifer, 1995; Pfeifer & Scheier, 1999), is actually to a high degree compatible with the view of robotic functionalism (Harnad, 1989), according to which embodiment is mainly about symbol/representation grounding (Harnad, 1990; cf. Anderson, 2003; Chrisley, 2003; Pezzulo et al., 2013), whereas cognition can still be conceived of as computation. On the other hand, much of the rhetoric in the embodied AI field, in particular early embodied AI researchers' rejection of traditional notions of representation and cognition as computation (e.g. Brooks, 1991; Pfeifer, 1995; Pfeifer & Scheier, 1999), suggests sympathy for more radical notions of embodied cognition that view all of cognition as embodied and/or rooted in the mechanisms of the living body (e.g. Maturana & Varela, 1987; Varela et al., 1991; Thompson, 2007; Johnson, 2007; Froese & Ziemke, 2009). More specifically, part of the problem with embodied AI is that, despite its strong biological inspiration, early embodied AI research very much focused on establishing itself as a new paradigm within AI and cognitive science, i.e. as an alternative to the traditional functionalist/computationalist paradigm (e.g. Beer, 1995; Pfeifer, 1995; Pfeifer & Scheier, 1999). Less effort was made, on the other hand, to connect to other theories outside AI, e.g. in theoretical biology, addressing issues of autonomy, embodiment, etc. Similarly, much embodied AI research distinguishes itself from its traditional AI counterpart in its interactive view of knowledge. For example, work on adaptive robotics, in particular evolutionary robotics (Nolfi & Floreano, 2000) and developmental/epigenetic robotics (e.g. Zlatev & Balkenius, 2001; Berthouze & Ziemke, 2003; Lungarella et al., 2003), is largely compatible with the constructivist/enactivist/interactivist view (e.g. Piaget, 1954; Varela et al., 1991; Bickhard, 1993, 2009; Ziemke, 2001a) of knowledge construction in sensorimotor interaction with the environment, with the goal of achieving some 'fit' or 'equilibrium' between internal, conceptual/behavior-generating mechanisms and the external environment (for a more detailed discussion of this aspect see Ziemke, 2001a).

However, the organic roots of these processes, which were emphasized in, for example, the theoretical biology of von Uexküll (1928, 1982) or Maturana and Varela's (1980, 1987) theory of autopoiesis (cf. below), are usually ignored in embodied AI, which for the most part still operates with a view of the body that is largely compatible with mechanistic theories in psychology, and a view of control mechanisms that is still largely compatible with computationalism (cf. Ziemke, 2000, 2001a). That is, the robot body is typically viewed as some kind of input and output device that provides physical grounding to the internal computational mechanisms. As we have seen above, this view of the physical body as the computational mind's sensorimotor interface to the world still pervades much of cognitive science and philosophy of mind. Thus, in practice, embodied AI, as a result of its history and interdisciplinary influences, has become a theoretical hybrid, combining a mechanical/behaviorist view of the body with the constructivist notion of interactive knowledge, and with the functionalist hardware-software distinction and its view of the activity of the nervous system as computational (cf. Ziemke, 2000, 2001a, 2004, 2007).

As Greenspan and Baars (2005) have pointed out (cf. also Sharkey & Ziemke, 1998, 2001; Ziemke, 2001a), the mechanistic/reductionistic approach to biology and psychology of leading early 20th-century researchers like Loeb (1918) and Pavlov (1927) paved the way for the strong dominance of behaviorism in psychology, as pursued by Watson (1925) and Skinner (1938). Cognitive science, with its traditional emphasis on representation and computation, is widely considered to have replaced or overcome the overly mechanistic view of behaviorism. However, as Costall (2006) pointed out, "it is not the case that mainstream cognitive psychology entirely replaced the traditional mechanistic model. It retains the old mechanistic image of the body. The new mechanism of mind has been merely assimilated to the old dualism of mind and body, along with the existing conception of the body as a passive machine". Although Costall's critique is directed mainly at traditional cognitive science and AI, rather than embodied AI, it should be noted that the mind/body or hardware/software dualism that he accuses modern psychology of is the exact same dualism that Searle (1980) accuses computationalist theories of. And, as discussed above and as Searle pointed out already back then in his discussion of the robot reply, whether or not the computational mind resides in a slightly less passive robotic body, i.e. a physical/mechanical container that allows the computational mind to interact with its environment through sensors and actuators, really does not make much of a difference when it comes to such computational/robotic systems as models of human cognition, intentionality, etc.

So, what is the alternative then? We have already seen various glimpses of theoretical frameworks that emphasize the biological nature and/or the organismic roots of natural embodied cognition, but what exactly are these frameworks?

Let us start with the theory of autopoiesis (Varela et al., 1974; Maturana & Varela, 1980, 1987; Varela, 1979, 1997). According to Varela (1997, p. 75): "An autopoietic system - the minimal living organization - is one that continuously produces the components that specify it, while at the same time realizing it (the system) as a concrete unit in space and time, which makes the network of production of components possible. More precisely defined: An autopoietic system is organized (defined as a unity) as a network of processes of production (synthesis and destruction) of components such that these components: (i) continuously regenerate and realize the network that produces them, and (ii) constitute the system as a distinguishable unity in the domain in which they exist". Prime examples of autopoiesis are living cells and organisms, which have also been referred to as "first-order" and "second-order autopoietic unities", respectively (e.g. Maturana & Varela, 1987).
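The circularity in this definition can be made a little more tangible with a toy numerical illustration - nothing more than that; the variables and all rates below are invented. Components regenerate the boundary that makes their own production possible, and when that loop is cut, the unity decays:

    # Toy illustration (no more than that; all rates are invented) of the
    # circularity in the definition above: components regenerate the
    # boundary that makes their production possible, and the boundary is
    # in turn maintained by the components it contains.

    def step(components, boundary, closed=True, dt=0.1):
        # Production of components requires an intact boundary...
        production = 0.8 * boundary / (1.0 + boundary) if closed else 0.0
        # ...boundary repair requires components; both also decay.
        d_comp = production - 0.4 * components
        d_bound = 0.8 * components / (1.0 + components) - 0.4 * boundary
        return components + dt * d_comp, boundary + dt * d_bound

    c, b = 0.5, 0.5
    for _ in range(100):
        c, b = step(c, b, closed=True)
    print(f"closed loop: components={c:.2f}, boundary={b:.2f}")  # grows toward ~1

    c, b = 0.5, 0.5
    for _ in range(100):
        c, b = step(c, b, closed=False)   # cut the production loop
    print(f"open loop:   components={c:.2f}, boundary={b:.2f}")  # decays toward 0

With the loop closed, the toy system approaches a self-sustaining state; with the production loop cut, both components and boundary decay, i.e. the 'unity' ceases to maintain itself.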

Somewhat controversially, Maturana and Varela (1980, 1987) actually consider all living systems to be cognitive systems. Naturally, this has been criticized by a number of authors who wish to reserve the term 'cognitive' for higher-level psychological processes (cf., e.g., Barandiaran & Moreno, 2006). Varela, however, defended the use of the term 'cognitive' as follows: "The reader may balk at my use of the term cognitive for cellular systems. But from what I have said it should be clear that the constitution of a cognitive domain links organisms and their worlds in a way that is the very essence of intentionality as used in modern cognitive science, and as it was originally introduced in phenomenology. My proposal makes explicit the process through which intentionality arises: it amounts to an explicit hypothesis about how to transform the philosophical notion of intentionality into a principle of natural science. The use of the term cognitive here is thus justified because it is at the very base of how intentionality arises in nature" (Varela, 1997, pp. 80-81).

For a competing, though closely related, theoretical framework, let us also have a quick look at Christensen and Hooker's (2000) theory of autonomy, which aims at a naturalistic theory of intelligent agency as an embodied feature of organized, typically living, dynamical systems. According to this view, agents are entities that engage in normatively constrained, goal-directed interaction with their environment. More specifically, "[l]iving systems are a particular kind of cohesive system ... in which there are dynamical bonds amongst the elements of the system which individuate the system from its environment". Let us have a look at some examples: A gas has no internal cohesion; its shape and condition are imposed by the environment. A rock, on the other hand, has internal bonds and behaves as an integral whole. However, these cohesive bonds are passive and rigid (i.e. stable deep-energy-well interactions constrain the constituents spatially), and they are local (i.e. there are no essential constraints on the boundary of the system). A cell, finally, has cohesive bonds and acts as an integrated whole, but those bonds are active (i.e. chemical bonds formed by shallow-energy-well interactions and continually actively remade), flexible (i.e. interactions can vary and are sensitive to system and environmental changes), and holistic (i.e. binding forces depend on globally organized interactions; local processes must interact globally to ensure the cell's survival). Autonomous systems, then, according to Christensen and Hooker (2000), are cohesive systems of the same general type as the cell. Their examples of autonomous systems include cells and organisms, as for autopoiesis, but also molecular catalytic bicycles, species, and colonies (for details see Christensen & Hooker, 2000). Regarding the differences between their theory of autonomy and the theory of autopoiesis, Christensen and Hooker (2000) state that the paradigm case of autopoiesis is the operationally closed system that produces all its components within itself, whereas their theory of autonomy emphasizes agent-environment interaction and a "directive organisation [that] induces pattern-formation of energy flows from the environmental milieu into system-constitutive processes". However, as Varela (1997, p. 82) pointed out, in the theory of autopoiesis the term operational closure "is used in its mathematical sense of recursivity, and not in the sense of closedness or isolation from interaction, which would be, of course, nonsense".

What these theoretical frameworks share is the view that living organisms have a particular organization, and that they take this organization to be fundamental to natural cognition. This is also the case for Bickhard's (1993, 2009) notion of cognitive systems as recursively self-maintaining, far-from-thermodynamic-equilibrium systems. As Bickhard (personal communication) points out, current robots have "no intrinsic stake in the world nor in their existence in the world nor in their existence as social agents". Discussing the case of a robot that regularly recharges its battery, which is a common scenario in embodied AI research (e.g. Montebelli et al., 2013), Bickhard (2009) emphasizes that the "contrast with the biological case arises in the fact that most of the robot's body is not far-from-equilibrium, cannot be self-maintained, and certainly not recursively self-maintained. Conversely, the only part of the robot that is far from equilibrium, the battery, is not self-maintaining". Interestingly, despite all similarities in the above theoretical frameworks, while the theory of autopoiesis and the underlying biology of cognition adopt a strictly anti-representationalist view of cognition, in Bickhard's interactivist theory of mind, with which Christensen and Hooker also sympathize, so-called interactive representations play a crucial role. This indicates that Chemero's above illustration of the embodied cognition research landscape might not necessarily provide a complete picture, and there might be room for conceptions of cognition as a biological phenomenon that reject the traditional functionalist/computationalist view, and maybe also the traditional notion of representation, but without necessarily committing to the eliminativism/anti-representationalism that characterizes radical embodied cognition according to Chemero. We will get back to this point.
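The recharging scenario is easy to sketch, and the sketch itself (toy dynamics; all names and numbers invented) makes Bickhard's point: the controller below keeps a single essential variable, battery charge, within viability bounds, much like a thermostat, but the body doing the regulating is not itself far from equilibrium, and is in no sense maintained, let alone recursively maintained, by the process:

    # Toy sketch of the battery-recharging scenario (invented numbers):
    # a bang-bang controller keeps one essential variable within
    # viability bounds. Note what is *not* here, on Bickhard's analysis:
    # the regulating body itself is not far from equilibrium and is not
    # self-maintained by this loop.

    charge, mode = 1.0, "work"
    for step in range(60):
        if charge < 0.2:
            mode = "charge"          # essential variable too low: dock
        elif charge > 0.9:
            mode = "work"            # recharged: resume the task
        charge += 0.1 if mode == "charge" else -0.05
    print(f"after 60 steps: charge={charge:.2f}, mode={mode}")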

The above characterizations of cognitive systems as autonomous, self-maintaining, and to some degree self-producing, living systems of course have a number of historical precursors, such as the concept of autonomy in von Uexküll's (1928, 1982) theoretical biology and theory of meaning (cf. Ziemke, 2000, 2001a; Ziemke & Sharkey, 2001), or, even much earlier than that, Spinoza's 17th-century concept of the conatus. Damasio (2003) summarized the latter as follows:

"It is apparent that the continuous attempt at achieving a state of positively regulated life is a deep and defining part of our existence - the first reality of our existence as Spinoza intuited when he described the relentless endeavour (conatus) of each being to preserve itself. ... Interpreted with the advantages of current hindsight, Spinoza's notion implies that the living organism is constructed so as to maintain the coherence of its structures and functions against numerous life-threatening odds. The conatus subsumes both the impetus for self-preservation in the face of danger and opportunities and the myriad actions of self-preservation that hold the parts of the body together. In spite of the transformations the body must undergo as it develops, renews its constituent, and ages, the conatus continues to form the same individual and respect the same structural design. " (Damasio, 2003, p. 36)

Damasio has criticized "the prevalent absence of a notion of organism in the sciences of mind and brain" as a problem, which he elaborated as follows: "It is not just that the mind remained linked to the brain in a rather equivocal relationship, but that the brain remained consistently separated from the body and thus not part of the deeply interwoven mesh of body and brain that defines a complex living organism" (Damasio, 1998, p. 84). His own theoretical framework, much in line with his above interpretation of Spinoza, is based on the view that "[nature has] built the apparatus of rationality not just on top of the apparatus of biological regulation, but also from it and with it" (Damasio, 1994, p. 128). This view is shared by somatic theories of emotion and consciousness, including the work of Damasio (1998, 1999, 2003), Panksepp (2005), and Prinz (2004), as well as their historical predecessors, James (1884) and Lange (1885). What these theories agree on, in a nutshell, is that emotions arise from multiple, nested levels of homeostatic regulation of bodily activity (cf. Figure 2), and that emotional feelings are feelings of such bodily changes (cf. Prinz, 2004). Panksepp (2005), with reference to Damasio's and his own work, has referred to somatic theories as "a multi-tiered affectively embodied view of mind" (Panksepp, 2005, p. 63).

Such somatic theories can be considered a biologically-based, but representational, view of cognition, although with a non-traditional twist: here the representations are not body-internal representations of body- or agent-external objects or states of affairs, but rather the brain's representations first and foremost of bodily activity, and indirectly, of course, also of the environment. According to Prinz, "emotions can represent core relational themes without explicitly describing them. Emotions track bodily states that reliably co-occur with important organism-environment relations, so emotions reliably co-occur with important organism-environment relations. Each emotion is both an internal body monitor and a detector of dangers, threats, losses, or other matters of concern. Emotions are gut reactions; they use our bodies to tell us how we are faring in the world" (Prinz, 2004, p. 69). In a similar vein, Damasio has argued that the essence of feelings of emotion lies in the mapping of bodily emotional states in the body-sensing regions of the brain, such as somatosensory cortex (Damasio, 1999, 2004). Such mental images of emotional bodily reactions are also crucial to Damasio's concept of the as-if body loop, a neural internal simulation mechanism (using the brain's body maps, while bypassing the actual body), whose cognitive function and adaptive value he elaborates as follows: "Whereas emotions provide an immediate reaction to certain challenges and opportunities ... [t]he adaptive value of feelings comes from amplifying the mental impact of a given situation and increasing the probabilities that comparable situations can be anticipated and planned for in the future so as to avert risks and take advantage of opportunities" (Damasio, 2004, pp. 56-57).
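In computational terms, the as-if body loop amounts to evaluating candidate actions on an internal body map instead of routing them through the body itself. The following Python sketch is a deliberately schematic rendering of that idea - the actions, variables and numbers are all invented for illustration, not taken from Damasio:

    # Schematic sketch of the 'as-if body loop' idea (all values invented):
    # candidate actions are evaluated on an internal body map - a model of
    # how each action would change bodily state - bypassing the actual
    # body, and the best predicted somatic outcome is chosen.

    BODY_MAP = {
        # action: predicted change to (energy, integrity)
        "approach_food": (+0.4, -0.1),
        "flee":          (-0.2, +0.3),
        "rest":          (+0.1,  0.0),
    }

    def valence(energy, integrity):
        """Predicted 'somatic' value of a bodily state (toy weighting)."""
        return 0.5 * energy + 0.5 * integrity

    def as_if_evaluate(state, actions=BODY_MAP):
        """Simulate each action on the body map, not on the body itself."""
        def predicted_value(action):
            d_energy, d_integrity = actions[action]
            return valence(state["energy"] + d_energy,
                           state["integrity"] + d_integrity)
        return max(actions, key=predicted_value)

    state = {"energy": 0.2, "integrity": 0.9}   # hungry but safe
    print(as_if_evaluate(state))                # -> 'approach_food'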

Since in the above discussion of somatic theories the notion of homeostatic bodily regulation has been mentioned, it should be noted that the term is used here in the wider sense, as used by Damasio and Carvalho (2013), i.e. the "process of maintaining the internal milieu physiological parameters (such as temperature, pH and nutrient levels) of a biological system within the range that facilitates survival and optimal function", rather than the narrower sense that emphasizes constancy. As discussed in more detail elsewhere (Vernon et al., 2015), of particular relevance to theories and models of embodied cognition is in fact the concept of predictive (self-)regulation or allostasis (Sterling, 2004, 2012; Schulkin, 2011). In line with the notion of the as-if body loop discussed above, Sterling (2012) points out: "The brain monitors a very large number of external and internal parameters to anticipate changing needs, evaluate priorities, and prepare the organism to satisfy them before they lead to errors. The brain even anticipates its own local needs, increasing flow to certain regions - before there is an error signal". In a similar vein, Seth (2013) has argued that "an organism should maintain well-adapted predictive models of its own physical body ... and of its internal physiological condition" (Seth, 2013, p. 567).
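The difference between reactive and predictive regulation is easy to see in simulation. The Python sketch below is a minimal sketch under invented dynamics - the 'demand' function simply stands in for whatever forecast of incoming load the brain's predictive models would provide - comparing a purely error-driven homeostat with an allostat that pre-compensates for the predicted disturbance:

    # Minimal sketch (invented dynamics) contrasting reactive, error-driven
    # homeostatic control with predictive, allostatic control of a single
    # regulated variable, e.g. core temperature around a 37-degree setpoint.

    def simulate(controller, demand, setpoint=37.0, steps=50):
        x, error_sum = setpoint, 0.0
        for t in range(steps):
            x += demand(t) + controller(x, setpoint, t, demand)
            error_sum += abs(x - setpoint)
        return error_sum / steps   # mean deviation from the setpoint

    def homeostat(x, setpoint, t, demand):
        # React only to the current error, after it has occurred.
        return 0.5 * (setpoint - x)

    def allostat(x, setpoint, t, demand):
        # Additionally cancel the forecast disturbance up front, before
        # it produces an error signal (cf. Sterling, 2012).
        return 0.5 * (setpoint - x) - demand(t)

    demand = lambda t: 0.8 if 20 <= t < 30 else 0.0   # anticipated load

    print(f"homeostasis: mean error = {simulate(homeostat, demand):.3f}")
    print(f"allostasis:  mean error = {simulate(allostat, demand):.3f}")  # ~0

The error-driven controller only corrects deviations after they have occurred, and therefore accumulates error for the duration of the disturbance; the predictive controller, given an accurate forecast, never lets the error arise in the first place - which is precisely Sterling's point about regulation "before there is an error signal".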

To sum up this section, before we move to the next: there are a number of overlapping theoretical frameworks that take cognition to be a genuinely biological phenomenon occurring in living organisms, and that therefore emphasize the fundamental role played by the living body in general, and mechanisms of homeostatic/allostatic self-regulation in particular, in natural embodied cognition. This clearly goes beyond the somewhat mechanistic view of the physical body as the computational mind's sensorimotor interface to the world, which pervades much of mainstream cognitive science and in particular embodied AI. While the controversial issue of 'representation' naturally is too complex to discuss in detail - let alone resolve - in this paper, the above discussion hopefully illustrates at least to some degree the potential role and nature of non-traditional 'representations'1 - in the sense of predictive models - in biologically-based, non-functionalist conceptions of embodied cognition.

1 Considering the emphasis on prediction, it might in fact be more accurate to think of them as pre-presentations, or simply pre-sentations.

4. Discussion and Conclusion

As Black (2014) has recently pointed out, we "seem to have an innate propensity to see bodies wherever we look" (p. 16). This is due to the fact that we "consistently anthropomorphise machines, our attempts to conceptualise unfamiliar new artefacts falling back on the most fundamental and sophisticated frameworks for understanding animation we have - those related to the human body" (Black, 2014, p. 38). Hence, the question "What is a body?" or "What is embodied?" is actually rarely asked. In embodied AI research, robots are usually considered 'embodied' as a matter of fact, simply because, unlike most traditional AI systems, they are physical and can interact with their environment through sensors and actuators. The fact that robot bodies in most cases actually have very little in common with the bodies of living organisms is not given equally much attention.

As discussed in Section 2, most work in embodied cognitive science falls into the category Chemero (2009) refers to as mainstream embodied cognitive science, which still is more or less compatible with traditional computationalist and representationalist conceptions of cognition, which to some degree reduce the body to the computational mind's physical/sensorimotor interface to the world it represents internally. Radical embodied cognitive science rejects these traditional notions, but at least in Chemero's formulation of the main claims/tenets of radical embodied cognition, it also does not emphasize the biological nature of embodied cognition as such, but rather focuses on explaining perception and action in dynamical-systems rather than representational terms. Accordingly, most research in embodied AI, of both the representationalist and the anti-representationalist type, has focused on sensorimotor interaction between agents and their physical and social environments, or, you might say, on grounding cognition in sensorimotor interaction. Naturally, physical robots and simulated robotic agents are the tools of choice for this type of embodied AI.

From the perspective of the theories discussed in Section 3, though, embodied cognition is not only grounded in sensorimotor interaction with the environment; at least in the case of natural cognition, that sensorimotor interaction with the environment is itself deeply rooted in the underlying biological mechanisms, and more specifically in layered/nested networks of bodily self-regulation mechanisms. According to Damasio and others, the connection lies in emotional mechanisms playing a crucial role in this self-regulation, fulfilling on the one hand a survival-related (bioregulatory, adaptive, homeostatic/allostatic) function, and on the other hand constituting the basis of higher-level cognition, self and consciousness. Panksepp (2005) refers to this as "a multi-tiered affectively embodied view of mind", which clearly goes beyond the physical/sensorimotor embodiment that current robots are limited to.

If we adopt the latter view of embodied cognition as a first and foremost biological phenomenon, then embodied AI is clearly still lacking complex models of such multi-level networks/hierarchies of self-regulation mechanisms - reaching from low-level bioregulatory mechanisms to higher levels of embodied emotion and cognition - and possibly also computational cognitive architectures for robotic systems that take some of these mechanisms into account (cf. Ziemke & Lowe, 2009; Lowe & Ziemke, 2011; Vernon et al., 2015). Accordingly, an important potential contribution of synthetic biology research to embodied AI, and to the understanding of natural embodied cognition, probably lies first and foremost in modeling/understanding/synthesizing the nature of organisms as such layered networks. This would be an important complement to current work in embodied AI and robotic cognitive architectures, much of which is predominantly concerned with layered architectures for dealing with the complexities of perceiving and acting in the world.
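To give the argument a concrete, if cartoonish, shape: the Python sketch below stacks three tiers - a metabolic layer, an affective layer, and a deliberative layer - so that bodily self-regulation grounds affect, which in turn biases behavior selection. Everything here (layer names, variables, couplings) is an invented illustration in the spirit of Figure 2, not a proposal for an actual architecture:

    # Invented sketch of a multi-tiered control stack in the spirit of the
    # 'layered networks of bodily self-regulation' argued for above: each
    # tier regulates its own variables and parameterizes the tier above.

    class MetabolicLayer:
        """Lowest tier: basic bioregulation (here: one energy variable)."""
        def __init__(self):
            self.energy = 1.0
        def update(self, effort):
            self.energy = max(0.0, self.energy - 0.02 - 0.05 * effort)

    class AffectiveLayer:
        """Middle tier: maps bodily state to a coarse appraisal - a
        stand-in for homeostatically grounded emotion."""
        def appraise(self, energy):
            return "distress" if energy < 0.3 else "contentment"

    class DeliberativeLayer:
        """Top tier: sensorimotor behavior selection, biased from below."""
        def select(self, affect, task):
            return "forage" if affect == "distress" else task

    body, affect, mind = MetabolicLayer(), AffectiveLayer(), DeliberativeLayer()
    for step in range(40):
        behavior = mind.select(affect.appraise(body.energy), task="explore")
        body.update(effort=0.5 if behavior == "explore" else 0.2)
        if behavior == "forage":
            body.energy = min(1.0, body.energy + 0.15)   # toy refuelling
    print(f"step 40: energy={body.energy:.2f}, behavior={behavior}")

The point of the toy is the direction of grounding: the top layer never reads raw sensor values of the world alone; its options are continuously re-weighted by an affective appraisal that is itself parameterized by bodily self-regulation.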

In closing, and at the risk of pointing out the obvious, it should also be noted that the two types of embodied AI discussed here - focusing on grounding embodied cognition in sensorimotor interaction, and on grounding sensorimotor interaction in bodily regulation, respectively - are of course highly complementary. This applies not only to embodied AI, but also to our understanding of embodied cognition in general. Johnson (2007), for example, described his own work on the embodiment of language, which also initially focused on understanding the grounding of language in sensorimotor interaction, as follows:

"In retrospect I now see that the structural aspects of our bodily interactions with our environments upon which I was focusing were themselves dependent on even more submerged dimensions of bodily understanding. It was an important step to probe below concepts, propositions, and sentences into the sensorimotor processes by which we understand our world, but what is now needed is a far deeper exploration into the qualities, feelings, emotions, and bodily processes that make meaning possible." (Johnson, 2007, p. x)

Acknowledgements

This work was supported in part by the Knowledge Foundation, Stockholm, under SIDUS grant agreement no. 20140220 (AIR, "Action and intention recognition in human interaction with autonomous systems").

References

1. Anderson, M. (2003). Embodied Cognition: A Field Guide. Artificial Intelligence, 149(1), 91-130.

2. Barandiaran, X. & Moreno, A. (2006). On what makes certain dynamical systems cognitive: A minimally cognitive organization program. Adaptive Behavior, 14(2), 171-185.

3. Beer, R.D. (1995). A dynamical systems perspective on agent-environment interaction. Artificial Intelligence, 72, 173-215.

4. Berthouze, L. & Ziemke, T. (eds.) (2003). Epigenetic robotics: Modelling Cognitive Development in Robotic Systems. Connection Science, 15(4), 147-150.

5. Bickhard, M.H. (1993). Representational Content in Humans and Machines. Journal of Experimental and Theoretical Artificial Intelligence, 5, 285-333.

6. Bickhard, M.H. (2009). The Biological Foundations of Cognitive Science. New Ideas in Psychology, 27, 75-84.

7. Black, D. (2014). Embodiment and Mechanisation: Reciprocal Understanding of Body and Machine from the Renaissance to the Present. Farnham, UK: Ashgate.

8. Brooks, R.A. (1991). Intelligence Without Reason. In: Proceedings of the Twelfth International Joint Conference on Artificial Intelligence (IJCAI-91). San Mateo, CA: Morgan Kaufmann, pp. 569-595.

9. Brooks, R.A. (1993). The Engineering of Physical Grounding. In: Proceedings of The Fifteenth Annual Conference of the Cognitive Science Society (pp. 153-154), Boulder, Colorado, Lawrence Erlbaum Associates, Inc.

10. Chemero, A. (2009). Radical Embodied Cognitive Science. Cambridge, MA: MIT Press

11. Chrisley, R. & Ziemke, T. (2002). Embodiment. In: Encyclopedia of Cognitive Science (pp. 1102-1108). London: Macmillan Publishers.

12. Chrisley, R. (2003). Embodied Artificial Intelligence. Artificial Intelligence, 149(1), 131-150.

13. Christensen, W.D. & Hooker, C.A. (2000). Autonomy and the emergence of intelligence: Organised interactive Construction. Communication and Cognition - Artificial Intelligence, 17(3-4), 133-157.

14. Clark, A. (1997). Being There. Cambridge, MA: MIT Press.

15. Clark, A. (1999). An embodied cognitive science? Trends in Cognitive Sciences, 3(9), 345-351.

16. Costall, A. (2006). Bringing the body back to life: James Gibson's ecology of agency. In: Ziemke, T., Zlatev, J. & Frank, R. (eds.) Body, Language and Mind. Volume 1: Embodiment. Berlin: Mouton de Gruyter.

17. Damasio, A.R. (1994). Descartes' error. Vintage Books.

18. Damasio, A.R. (1998). Emotion in the perspective of an integrated nervous system. Brain Research Reviews, 26, 83-86.

19. Damasio, A.R. (1999). The Feeling of What Happens: Body, Emotion and the Making of Consciousness. London: Vintage.

20. Damasio, A.R. (2003). Looking for Spinoza: Joy, Sorrow and the Feeling Brain. Orlando, FL: Harcourt.

21. Damasio, A.R. (2004). Emotions and Feelings: A Neurobiological Perspective. In: Manstead, A., Frijda, N. & Fischer, A. (eds.) (2004) Feelings and Emotions - The Amsterdam Symposium. Cambridge University Press.

22. Damasio, A. & Carvalho, G.B. (2013). The nature of feelings: evolutionary and neurobiological origins. Nature Reviews. Neuroscience, 14, 143-152. doi: 10.1038/nrn3403

23. Dreyfus, H. (1979). What Computers Can't Do. Cambridge, MA: MIT Press.

24. Froese, T. & Ziemke, T. (2009). Enactive artificial intelligence: Investigating the systemic organization of life and mind. Artificial Intelligence, 173, 466-500. doi: 10.1016/j.artint.2008.12.001

25. Gallagher, S. (2005). How the Body Shapes the Mind. Oxford: Oxford University Press.

26. Greenspan, R.J. & Baars, B.J. (2005). Consciousness eclipsed: Jacques Loeb, Ivan P. Pavlov, and the rise of reductionistic biology after 1900. Consciousness and Cognition, 14(1), 219-230.

27. Goldinger, S.D., Papesh, M.H., Barnhart, A.S., Hansen, W.A. & Hout, M.C. (2016). The poverty of embodied cognition. Psychonomic Bulletin & Review, 23:959-978. doi: 10.3758/s13423-015-0860-1

28. Harnad, S. (1989). Minds, machines and Searle, Journal of Experimental & Theoretical Artificial Intelligence, 1(1), 5-25.

29. Harnad, S (1990). The Symbol Grounding Problem. Physica D, 42, 335-346.

30. James, W. (1884). What is an emotion? Mind, 9, 188-205.

31. Johnson, M. (2007). The Meaning of the Body: Aesthetics of Human Understanding. Chicago: University of Chicago Press.

32. Lange, C.G. (1885). Om sindsbevægelser - Et psyko-fysiologisk studie. Copenhagen: Jacob Lunds.

33. Lindblom, J. (2015). Embodied Social Cognition. Heidelberg: Springer.

34. Loeb, Jacques (1918). Forced movements, tropisms, and animal conduct. Philadelphia: Lippincott Company.

35. Lowe, R. & Ziemke, T. (2011). The feeling of action tendencies: On the emotional regulation of goal-directed behavior. Frontiers in Psychology, 2:346. doi: 10.3389/fpsyg.2011.00346

36. Lungarella M., Metta G., Pfeifer R. & Sandini G. (2003). Developmental robotics: a survey. Connection Science, 15(4), 151-190.

37. Killeen, P. (2016). The House of the Mind. Commentary on Goldinger et al. (2016) The poverty of embodied cognition. Available at: https://asu.academia.edu/PeterKilleen (5 August 2016).

38. Maturana, H. R. & Varela, F. J. (1980). Autopoiesis and Cognition. Dordrecht: Reidel.

39. Maturana, H. R. & Varela, F. J. (1987). The Tree of Knowledge - The Biological Roots of Human Understanding. Boston, MA: Shambhala.

40. Morse, A., Herrera, C., Clowes, R., Montebelli, A. & Ziemke, T. (2011). The role of robotic modeling in cognitive science. New Ideas in Psychology, 29(3), 312-324.

41. Montebelli, A., Lowe, R. & Ziemke, T. (2013). Towards metabolic robotics: insights from modeling embodied cognition in a bio-mechatronic symbiont. Artificial Life, 19(3-4), 299-315.

42. Nolfi, S. & Floreano, D. (2000). Evolutionary Robotics. Cambridge, MA: MIT Press.

43. Panksepp, J. (2005). Affective consciousness: Core emotional feelings in animals and humans. Consciousness and Cognition, 14, 30-80.

44. Pavlov, Ivan P. (1927). Conditioned Reflexes. London: Oxford University Press.

45. Pfeifer, R. (1995). Cognition - Perspectives from autonomous agents. Robotics and Autonomous Systems, 15, 47-70.

46. Pfeifer, R. & Scheier, C. (1999). Understanding Intelligence. Cambridge, MA: MIT Press.

47. Piaget, Jean (1954). The Construction of Reality in the Child. New York: Basic Books. Originally appeared as Piaget (1937) La construction du réel chez l'enfant. Neuchâtel, Switzerland: Delachaux et Niestlé.

48. Pezzulo, G., Barsalou, L.W., Cangelosi, A., Fischer, M.H., McRae, K. & Spivey, M.J. (2013) Computational Grounded Cognition: a new alliance between grounded cognition and computational modeling. Front. Psychology 3:612. doi: 10.3389/fpsyg.2012.00612

49. Preston, J. & Bishop, M. (eds.) (2002). Views Into the Chinese Room: New Essays on Searle and Artificial Intelligence. Oxford University Press, Oxford.

50. Schulkin, J. (2011). Social allostasis: anticipatory regulation of the internal milieu. Frontiers in Evolutionary Neuroscience, 2:111. doi: 10.3389/fnevo.2010.00111

51. Shapiro, L. (2010). Embodied Cognition. Routledge.

52. Sharkey, N.E. & Ziemke, T. (1998). A consideration of the biological and psychological foundations of autonomous robotics. Connection Science, 10(3-4), 361-391.

53. Sharkey, N.E. & Ziemke, T. (2001). Mechanistic vs. Phenomenal Embodiment: Can Robot Embodiment Lead to Strong AI? Cognitive Systems Research, 2(4), 251-262.

54. Skinner, B.F. (1938). The behavior of organisms. New York: Appleton-Century-Crofts.

55. Steels, L. (1994). The artificial life roots of artificial intelligence. Artificial Life, 1, 75-110.

56. Stewart, J. (1996). Cognition = Life: Implications for higher-level cognition. Behavioural Processes, 35, 311-326.

57. Searle, J.R. (1980). Minds, brains, and programs. Behavioral and Brain Sciences, 3 (3), 417-457.

58. Seth, A.K. (2013). Interoceptive inference, emotion, and the embodied self. Trends in Cognitive Science, 17, 565-573. doi: 10.1016/j.tics.2013.09.007

59. Sterling, P. (2004). Principles of allostasis. In: J. Schulkin (ed.) Allostasis, Homeostasis, and the Costs of Adaptation, pp. 17-64. Cambridge: Cambridge University Press.

60. Sterling, P. (2012). Allostasis: a model of predictive regulation. Physiology & Behavior, 106, 5-15. doi: 10.1016/j.physbeh.2011.06.004

61. Thompson, E. (2007). Mind in Life. Cambridge, MA: Harvard University Press.

62. Varela, F.J. (1979). Principles of Biological Autonomy. New York: Elsevier.

63. Varela, F.J. (1997). Patterns of Life: Intertwining Identity and Cognition. Brain and Cognition, 34, 72-87.

64. Varela, F.J., Maturana, H.R. & Uribe, R. (1974). Autopoiesis: the organization of living systems, its characterization and a model. Biosystems, 5, 187-196.

65. Varela, F.J., Thompson, E. & Rosch, E. (1991). The embodied mind: Cognitive science and human experience. Cambridge, MA: MIT Press.

66. Vernon D., Lowe R., Thill S. & Ziemke T. (2015). Embodied cognition and circular causality: On the role of constitutive autonomy in the reciprocal coupling of perception and action. Frontiers in Psychology, 6:1660. doi: 10.3389/fpsyg.2015.01660.

67. von Uexküll, Jakob (1928). Theoretische Biologie. Berlin: Springer Verlag.

68. von Uexküll, J. (1982). The Theory of Meaning. Semiotica, 42(1), 25-82. Originally appeared as: von Uexküll, J. (1940). Bedeutungslehre. Leipzig: Verlag J.A. Barth.

69. Wilson, A.D. & Golonka, S. (2013). Embodied cognition is not what you think it is. Frontiers in Psychology, 4:58. doi: 10.3389/fpsyg.2013.00058

70. Ziemke, T. (1999). Rethinking grounding. In: A. Riegler, M. Peschl & A. von Stein (eds.) Understanding Representation in the Cognitive Sciences. New York: Plenum Press.

71. Ziemke, T. (2000). Situated neuro-robotics and interactive cognition. Doctoral dissertation, University of Sheffield, UK.

72. Ziemke, T. (2001a). The Construction of 'Reality' in the Robot. Foundations of Science, 6(1), 163-233.

73. Ziemke, T. (2001b). Are Robots Embodied? In: Balkenius, C., Zlatev, J., Breazeal, C., Dautenhahn, K. & Kozima, H. (eds.) Proceedings of the First International Workshop on Epigenetic Robotics: Modelling Cognitive Development in Robotic Systems (pp. 75-83). Lund University Cognitive Studies, vol. 85, Lund, Sweden.

74. Ziemke, T. (2003). What's that thing called embodiment? In: Alterman, R. & Kirsh, D. (eds.) Proceedings of the 25th Annual Conference of the Cognitive Science Society (pp. 1305-1310). Mahwah, NJ: Lawrence Erlbaum.

75. Ziemke, T. (2004). Embodied AI as Science: Models of Embodied Cognition, Embodied Models of Cognition, or Both? In: Iida, F., Pfeifer, R., Steels, L. & Kuniyoshi, Y. (eds.) Embodied Artificial Intelligence (pp. 27-36). Heidelberg: Springer.

76. Ziemke, T. (2007). What's life got to do with it? In: Chella, A. & Manzotti, R. (eds.) Artificial Consciousness (pp. 48-66). Exeter: Imprint Academic.

77. Ziemke, T. (2008). On the role of emotion in biological and robotic autonomy, BioSystems, 91, 401-408.

78. Ziemke, T. & Lowe, R. (2009). On the role of emotion in embodied cognitive architectures: From organisms to robots. Cognitive Computation, 1, 104-117. doi: 10.1007/s12559-009-9012-0

79. Ziemke, T. & Sharkey, N. E. (2001). A stroll through the worlds of robots and animals. Semiotica, 134(1-4), 701-746.

80. Ziemke, T. & Thill, S. (2014). Robots are not embodied! Conceptions of embodiment and their implications for social human-robot interaction. In: Seibt, J., Hakli, R. & Nørskov, M. (eds.) Sociable Robots and the Future of Social Relations (pp. 49-53). Amsterdam: IOS Press.

81. Ziemke, T., Zlatev, J. & Frank, R. (eds.) (2006). Body, Language and Mind. Volume 1: Embodiment. Berlin: Mouton de Gruyter.

82. Zlatev, J. (2001). The epigenesis of meaning in human beings, and possibly robots. Minds and Machines, 11(2), 155-195.

83. Zlatev, J. (2002). Meaning = Life (+ Culture): An outline of a unified biocultural theory of meaning. Evolution of Communication, 4, 253-296.

84. Zlatev, J. & Balkenius, C. (2001). Introduction: Why "epigenetic robotics"? In: Balkenius, C., Zlatev, J., Breazeal, C., Dautenhahn, K. & Kozima, H. (eds.) Proceedings of the First International Workshop on Epigenetic Robotics: Modelling Cognitive Development in Robotic Systems (pp. 1-4). Lund University Cognitive Studies, vol. 85, Lund, Sweden.

Figure 1: Current notions of embodied cognitive science and their historical roots. Adapted from Chemero (2009, p. 30).

Figure 2: Damasio's illustration of "levels of automated homeostatic regulation, from simple to complex", constituting what Panksepp (2005) called "a multi-tiered affectively embodied view of mind". Illustration from Ziemke (2008, p. 406); adapted from Damasio (2003, p. 32).