
Procedia Computer Science 94 (2016) 392-397

The 2nd International Workshop on Communication for Humans, Agents, Robots, Machines and Sensors (CHARMS 2016)

Towards a Semantic Theory of Robotic Non-Cooperation

Victor Raskin*

Linguistics, CERIAS, CS, CIT, Purdue University, W. Lafayette, IN 47907, USA

Julia Taylor Rayz

CIT, CERIAS, Purdue University, W. Lafayette, IN 47907, USA

Abstract

This paper explores the notion of non-cooperation by a robot or an intelligent agent within a HARMS hybrid-team project. There is surprisingly little research on this topic, which should acquire considerable urgency in light of cybersecurity threats. To understand non-cooperation, one has to define cooperation more substantively than has been done so far. We approach non-cooperation on the basis of a rule-based, non-statistical, semantic approach, where meaning is viewed comprehensively, approximating human comprehension, at the practical grain size that humans use to communicate, rather than selectively on the basis of what is easy to formalize. Depending on whether the reader is new to or familiar with semantics, the paper can be read as a position paper or an implementation outline because, of course, most position papers are a little bit of both.

© 2016 Published by Elsevier B.V. This is an open access article under the CC BY-NC-ND license

(http://creativecommons.org/licenses/by-nc-nd/4.0/).

Peer-review under responsibility of the Conference Program Chairs

Keywords: robotic cooperation; robotic non-cooperation; ontological semantic technology

1. Introduction

This paper is based on the position that robotics in general, and cooperative robotics/cooperative intelligent agent systems in particular, cannot exist without an assumption that not all players play nicely with each other. To this end, we export

* Corresponding author. Tel.: +1-765-409-0675; fax: +1-765-494-3780. E-mail address: vraskin@purdue.edu


doi:10.1016/j.procs.2016.08.060

our knowledge of broad-based information security into this field of non-cooperative robotics. Our research in robotic intelligence emerged, as a first experiment in porting our ontological semantic approach to computational semantics into this new area, a few years after our transformative venture of natural language information assurance and security1. The fit with robotic intelligence turned out to be near-perfect because of the immediate connection to Eric Matson's M2M Lab's hybrid-team effort resulting in HARMS: a grand theoretical schema of projects involving humans, (intelligent) agents/avatars, robots, machines, and sensors, interacting (and this was our contribution to the 2011 Summer School in Humanoid Robotics at Purdue) in natural language. A number of projects have been and are being implemented in HARMS as the ongoing research continues, with the first graduates flowing into industry and academia. Since 2011, we have published together and separately, with and without other student and faculty collaborators, on the entire HARMS enterprise, its semantic aspect as related to "something very much like consciousness," and most recently on the semantic implementation2,3,4,5,6.

Throughout, the HARMS work has been premised on the principle of intelligent cooperation among all autonomous agents, often to the point of indistinguishability between human and non-human contributions to HARMS communications: "who said this, Bob or DXYZ623-bis?" In those earlier publications, we looked at the basic research on principles of cooperation, such as7,8,9,10,11,12,13,14,15, and discovered that most of it was published decades ago, focusing on humans as agents. The few ventures into computational agents tended to be tentative and to aspire towards first-order-logic-based formalizations, which, elegant as they often are, remain quite far from substantive implementations16,17,18,19,20. We need to go further, for both theoretical and practical reasons: we want to be able to assess and to reconfirm cooperation as a form of control and supervision, as well as a means of providing security. We have always been concerned with robotic security and have found most robotics to be remarkably unconcerned with it, making it very easy to hack the process of communication. To detect non-cooperation, we must be able to tell it from cooperation, and that requires a deeper and more substantive, instrumental, methodological, and technological notion of it.

2. Computer/Robotic Cooperation and Non-Cooperation

My laptop and my printer can be described, somewhat unnecessarily, as cooperating when I press the Print button with my cursor or send the Print command from the command line. And if I did not do it right, or there is a bug in the software, or the US national grid is down, there will be no printout. The non-cooperation is pretty easy to discern then. It is just as simple to detect human non-cooperation when the work is done from a seat at the assembly line: a worker hanging out by the coke machine may not be tightening that particular screw on the car chassis. A non-moving robotic arm, programmed for one operation, is not functioning either.

What is interesting and significant to discuss is intentional cooperation among autonomous agents, each capable of performing several operations and of deciding when and where to perform them. Humans do it because of their natural or coerced intention to achieve a certain result. Computers and robots use code to simulate those intentions. The task corresponds to a certain activity. In a non-homogeneous environment, a task implemented by several agents is divided among them, each contributing a part corresponding to its skills or functionalities. The first time around, the obligations can be divided by the supervisor but, later on, habit may make the discussion of who does what when unnecessary. A rigidly divided procedure, without any substitutions or redundancies, is trivially cooperative, and it is very hard to sabotage it unnoticeably. It is, therefore, important to establish cooperation or the lack thereof when it is not trivial.

In a less rigorous, human-like system, with options and substitutions, an agent may perform its legitimate function out of turn or when it is not the nearest to the task. It may do it too often, at the expense of something else. It may systematically opt to do what it is less well qualified for rather than the unique operation it performs best. Such a system is characterized by the following properties:

• It has more than one autonomous agent

• It has at least one but typically several multifunctional and mutually replaceable agents

• The procedure is implemented optimally but not necessarily in any rigidly algorithmic way

The options, such as which agent gets to perform an operation, may be decided contextually on the basis of several, possibly hierarchically arranged parameters: which agent does it better and/or faster, which is the nearest to it, which has more charge left, etc.
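As an illustration only, the minimal Python sketch below shows one way such hierarchically arranged parameters might be combined to pick an agent for an operation; the agent attributes, values, and function names are our own illustrative assumptions, not part of any HARMS implementation.

# Hypothetical sketch: rank candidate agents for an operation by hierarchically
# arranged parameters (proficiency first, then nearness to the task, then
# remaining charge). All names and numbers are invented for illustration.
from dataclasses import dataclass

@dataclass
class Agent:
    name: str
    skill: dict               # operation name -> proficiency in [0, 1]
    distance_to_task: float   # meters
    charge: float             # battery fraction in [0, 1]

def choose_agent(agents, operation):
    """Return the best-suited agent for the operation, or None if nobody can do it."""
    capable = [a for a in agents if a.skill.get(operation, 0.0) > 0.0]
    if not capable:
        return None
    # Lexicographic (hierarchical) ordering: proficiency dominates, then
    # nearness to the task, then remaining charge.
    return max(capable,
               key=lambda a: (a.skill[operation], -a.distance_to_task, a.charge))

team = [
    Agent("R1", {"weld": 0.9, "carry": 0.4}, distance_to_task=12.0, charge=0.35),
    Agent("R2", {"weld": 0.7, "carry": 0.8}, distance_to_task=3.0, charge=0.90),
]
print(choose_agent(team, "weld").name)   # R1: the better welder, despite the distance
print(choose_agent(team, "carry").name)  # R2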

Cooperation of this class of complexity is worth researching and programming. Non-cooperation of this complexity is difficult to detect, often even for humans, and it is definitely worth figuring out computationally. In fact, among humans, it often amounts to the hardest forensic task to resolve, that of inside treason. In the next two sections, we will sketch out two solution paths for it, one for the current stage of the resources in ontological semantic technology and the other for the immediate future, which is already pressing on the entire task and industry of natural language processing applications, including, of course, cybersecurity.

3. Cognition Represented

This paper, as all of our work, is a cognitive enterprise. We need to understand the nature and substance of a phenomenon, analyze it structurally, express this knowledge in machine language, and model it computationally. We know and respect statistics, so we understand that this cannot be done statistically: some clear soup may be consumed with the help of a fork, but no matter how fine the fork is or what large numbers are involved in cycling it, it is not the right instrument for eating soup, and most of the soup will escape capture. This might be the most important reason why semantics has not really entered the world of robotics.

Let us assume, for one long paragraph, a total naivete about semantics on the part of the current audience (the others, please skip to the next paragraph). Semantics deals with meaning. Meaning reflects our knowledge of the world. Language is a system of labeling and interrelating elements of that knowledge. In what is misleadingly termed semantics in most current natural language processing, some labels are manually and randomly assigned to a few select items, and this is added to a reasonably standard machine-learning enterprise. The cognitive gain is small to insignificant. What is missing is something that panicked and paralyzed the early transformational generative semanticists back in the 1960s21, namely, a nascent realization that to represent the meaning of every sentence one must represent our entire knowledge of the world. And yet, it is inescapable. Rather than panicking and trying to dance around the task, partially, taxonomically, or statistically, the challenge is to devise a reliable and feasible way of doing it. We have preached and practiced the ontological approach, creating, acquiring, and discovering a property-rich ontology where both events and objects are linked together via a few hundred properties.

The acquisition and testing are supported by a computational environment which guides a human acquirer into answering well-formulated questions on the acquirer's prominent area of competence, his or her understanding of what words and sentences mean. The environment captures, structuralizes, formalizes, and computes human intuition. This is a perfect tool for the cognition soup22,23,24,25. Cognition expresses knowledge in conceptual terms, and the concepts are defined with mathematical precision as nodes in a logical lattice, thus superseding the seductiveness of the simplistic first-order, or description, logic.

The availability of a well-developed language-independent ontology, with several thousand nodes and an easy possibility for expansion, as well as of the language-specific (e.g., English) lexicon, where every sense of every word is defined via ontological concepts and their properties, already allows us to (a toy sketch of such resources follows the list):

• interpret every English sentence ontologically, and, additionally,

• express every piece of knowledge in the same conceptual terms independently of the mode in which it is expressed.
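The Python fragment below is a deliberately tiny illustration of what such resources look like computationally; the concept names, properties, and lexicon entries are invented for the example and are not excerpts from the actual OST ontology or lexicon.

# Toy fragment of an ontology and of a lexicon keyed to it. The real OST
# ontology has several thousand concept nodes linked by a few hundred
# properties; everything below is an invented, simplified stand-in.
ONTOLOGY = {
    "object":   {"is-a": None},
    "event":    {"is-a": None},
    "place":    {"is-a": "object"},
    "airport":  {"is-a": "place"},
    "human":    {"is-a": "object"},
    "insect":   {"is-a": "object"},
    "airplane": {"is-a": "object"},
    "arrive":   {"is-a": "event", "agent": "human", "destination": "place"},
    "fly":      {"is-a": "event", "agent": "human", "instrument": "airplane",
                 "source": "place", "destination": "place"},
}

# Every sense of every word is defined via ontological concepts.
LEXICON = {
    "arrive": [{"pos": "verb", "concept": "arrive"}],
    "fly":    [{"pos": "verb", "concept": "fly"},
               {"pos": "noun", "concept": "insect"}],
}

def senses(word):
    """Return the ontological concepts a word can map onto."""
    return [entry["concept"] for entry in LEXICON.get(word, [])]

def inherits(concept, ancestor):
    """Follow the is-a links to test whether concept is a kind of ancestor."""
    while concept is not None:
        if concept == ancestor:
            return True
        concept = ONTOLOGY.get(concept, {}).get("is-a")
    return False

print(senses("fly"))                # ['fly', 'insect']
print(inherits("arrive", "event"))  # True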

4. Goal Congruency as an Approximation of Result-Achieving Intention

We are now ready to get into the business of solving the cognitive task at hand. How do we determine that a robot R or an agent A is non-cooperative? How do we determine that a human member of an all-human team is not cooperative? We observe and interpret his or her behavior, review its results, and relate it to the whole project. To perceive and to assess, we must acquire a cognitive ability to represent information, and the ontological semantic technology (OST) presents us with the resources and software to do that.

The next question, which is central to the technology, is what detectable semantic resources need to be interpreted to achieve the determination. If a joint project for a group of humans is a day flight from New York to Boston, then each member of the group must first show up at a New York airport, which may not be the same for everybody but must have a flight to Boston, possibly arriving at Logan roughly at the same time. An ontological semantic text meaning representation (TMR) of the New York airport arrival can be conceptualized as the text meaning representation of an English sentence such as Bob arrived at JFK at 7:30 am on Wednesday, or

arrive
    agent         Bob
    instrument    automobile, subway, railroad (, bicycle, feet)
    source        home, hotel
    time          9 am
    date          Wednesday

If Bob fails to arrive there at that time, he will miss the flight to Boston and, therefore, will not be cooperative. However, we can determine that only if we possess the knowledge of what is supposed to happen next, namely, boarding the plane to Boston. In plain words, we must know the final, or at least a subsequent, goal in order to figure out whether a preliminary goal is met, such that, if it is not, the subsequent goal is defeated. Goal is one of the properties in the OST ontology, and a more complete TMR of the situation is, then:

arrive
    agent         Bob
    instrument    automobile, subway, railroad (, bicycle, feet)
    source        home, hotel
    time          9 am
    date          Wednesday
fly
    agent
    instrument    airplane
    source        NY airport
    destination   Boston airport
    goal
    duration
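For concreteness, the same situation can be rendered as a nested data structure; in the Python sketch below, the slot names and values are taken from the frames above, the goal slot of the arrival event is made to point at the flight event explicitly, and the slots whose values are not specified above are deliberately left empty rather than invented.

# The extended TMR as a nested structure: the goal slot of the arrival event
# points at the flight event it is meant to enable. Unspecified slots are None.
fly_event = {
    "event": "fly",
    "agent": None,
    "instrument": "airplane",
    "source": "NY airport",
    "destination": "Boston airport",
    "goal": None,
    "duration": None,
}

arrive_event = {
    "event": "arrive",
    "agent": "Bob",
    "instrument": ["automobile", "subway", "railroad"],
    "source": ["home", "hotel"],
    "time": "9 am",
    "date": "Wednesday",
    "goal": fly_event,
}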

Clearly, the TMR may be easily extended to include every participant as an agent, to complete the itinerary in Boston and, similarly to the arrival, to specify the departure back to New York and the return to the original source, such as home. Allowing for all variations here, depending on individual circumstances, we observe one constant feature, namely, the coordination of goals among arrival, departure, and everything in between. The failure to achieve any preliminary goal disrupts the procedure. This is our definition of non-cooperation (as well as that of cooperation: avoiding any such failure). Let us define goal congruency thusly:

congruent (goal1, goal2): achieve(goal2) iff achieve(goal1)

The relationship is clearly transitive, and in a complex project there may be a number of long congruent sequences of goals, all meeting in the final goal, so it is all right to see that graph as a goal tree. The graph nature of the situation will offer additional mathematical advantages. Ontologically, one can think of various ways of achieving goals and of diverse congruency relations, but the principle remains the same: making sure, computationally, that every early goal is achieved in order to determine whether an agent is cooperative. The determination of non-cooperation may be the final step in this procedure, while being only the beginning of a forensic investigation, but the principle of the approach remains the same: relying on a certain ontological property to determine cooperation or the lack thereof, and this is achievable at the current stage of OST implementation.
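A minimal sketch of this check, assuming the nested TMR rendering introduced above and a simple record of which events were observed to be achieved, might look as follows; the function names and the bookkeeping are our own illustrative assumptions, not an OST interface.

# Follow a chain of congruent goals (each event's "goal" slot points at the
# subsequent event it enables) and flag the earliest unachieved goal: by the
# congruency definition, its failure defeats everything downstream.
def first_unachieved(event, achieved):
    """Return the first event in the congruent chain that was not achieved,
    or None if the whole chain holds."""
    while event is not None:
        if event["event"] not in achieved:
            return event
        event = event.get("goal")          # follow the congruency link
    return None

def cooperative(event, achieved):
    """An agent is cooperative iff every goal in its chain is achieved."""
    return first_unachieved(event, achieved) is None

# Tiny usage example with the arrive/fly chain:
arrive_event = {"event": "arrive", "goal": {"event": "fly", "goal": None}}
achieved = {"arrive"}                      # Bob showed up, but never boarded
print(cooperative(arrive_event, achieved))                   # False
print(first_unachieved(arrive_event, achieved)["event"])     # fly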

5. Using Ontological Scripts to Assess Robotic Cooperation or Non-Cooperation

Note that the congruent goals in the trip example are arranged temporally: the previous goal needs to be achieved in order for the process with the later goal to start. In other words, a passenger must arrive at the airport before boarding a plane. There are many other processes inside projects that are organized the same way. But there are also many projects that are divided among their agents differently, because agents may work on different parts of the project and/or specialize according to their functionalities. An agent, including a robot, may assemble parts of a house or of another constructed object; he/she/it may transport parts from place to place; it may paint a part before assembling it, etc.

The way human planners supervise projects is by dividing them into component parts and assigning the duties to different (groups of) agents. This brings up the underexplored notion of script. Introduced decades ago and somewhat explored in psychology and business, it was successfully applied to the study of verbal humor26,27,28 but has not yet been sufficiently tightened up and formalized for computational usage. A script is an ordered and/or logically connected set of events, such that each component contributes to the whole script and may be used to evoke it. Thus, in a bankruptcy29, a missed payroll or an urgent application for a high-interest loan from a shark may strongly imply an insufficiency of funds: in fact, large corporations still expend a large amount of expensive human effort to determine, on a daily basis, the financial health of their hundreds or thousands of partners, such as suppliers, buyers, and service providers.

Various household routines are scripted, such as meals, outings, games, etc. In each script, the goals are reasonably clearly determined for each component, and the congruent goal pairs are structured into sequences accordingly. Cooperation is still governed by the congruency relationship above, but it is harder to determine than in a simple temporal sequence. Now, at the current stage of robotics, many robots have very limited functionalities and seem to inhabit very small domains. We have actually illustrated elsewhere that the "primitive" nature of ontologies for such robots is an illusion because the general component of the ontology is always added to the limited functionalities. And yet, goal congruency, established by the limited number of scripts, still makes the task of determining cooperation and detecting non-cooperation easier.
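As a sketch of how a script might be put to this use, one can represent a script as an ordered set of component events, each with the goal it contributes, and match an agent's observed events against it; the script contents and the matching policy in the Python fragment below are our own simplified assumptions.

# A toy script: an ordered set of component events, each contributing a goal
# to the whole. Observed behaviour is matched against the script; missing
# components are candidate evidence of non-cooperation (or of a legitimate
# substitution, which a fuller model would have to rule out).
DINNER_SCRIPT = [
    {"event": "shop",  "goal": "have ingredients"},
    {"event": "cook",  "goal": "have meal"},
    {"event": "serve", "goal": "meal on table"},
    {"event": "clean", "goal": "kitchen restored"},
]

def evokes(script, observed_events):
    """Any component of a script may be used to evoke the whole script."""
    return any(step["event"] in observed_events for step in script)

def missing_components(script, observed_events):
    """Return the script components whose events were never observed."""
    return [step for step in script if step["event"] not in observed_events]

observed = {"shop", "cook", "serve"}
print(evokes(DINNER_SCRIPT, observed))                                          # True
print([step["event"] for step in missing_components(DINNER_SCRIPT, observed)])  # ['clean']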

The optimal way of combining script information with the OST ontology has yet to be determined. It is clear that the knowledge of scripts is an essential part of our knowledge of the world, and hence the computing devices and agents must have access to scripts all the time. Whether scripts should be somehow superimposed on the OST nodes or kept in a separate resource may be a matter of convenience, and yet it should be developed and organized pretty soon for real progress in computerizing cognitive tasks to take place.

6. Conclusion and Future Work

We are planning to work on further clarification and enrichment of the processes of cooperation and of detecting non-cooperation as the CHARMS initiative and robotic intelligence continue to evolve. We are particularly hopeful about the ongoing effort on defining and applying scripts in the study of reasoning.

References

1. Raskin V, Taylor JM. A Fresh Look at Semantic Natural Language Information Assurance and Security: NL-IAS From Watermarking and Downgrading to Discovering Unintended Inferences and on to Situational Conceptual Defaults. In: Akhgar B, Arabnia HR, editors. Emerging Trends in Information and Communication Technologies Security, Amsterdam: Elsevier (Morgan Kaufmann), 2013.

2. Matson ET, Taylor JM, Raskin V, Min B-C, Wilson EC. A Natural Language Model for Enabling Human, Agent, Robot and Machine Interaction, The 5th IEEE International Conference on Automation, Robotics and Applications, Wellington, New Zealand, 2011.

3. Raskin V, Taylor JM, Matson ET. Towards an Ontological Modelling of Something Very Much Like Consciousness: The Harms Way, Society for Design and Process Science Conference, Berlin, Germany, 2012.

4. Raskin V, Taylor JM. Comprehensive Semantics in Robotic Intelligence and Communication: Necessity and Feasibility, ICARA 2015, Queenstown, NZ, 2015.

5. Taylor JM. Mapping human understanding to robotic perception. CHARMS 2015 at 12th International Conference on Mobile Systems and Pervasive Computing (MobiSPC 2015), Belfort, France, 2015.

6. Raskin V. Theory, Methodology, and Implementation of Robotic Intelligence and Communication. CHARMS 2015 at 12th International Conference on Mobile Systems and Pervasive Computing (MobiSPC 2015), Belfort, France, 2015.

7. Rao S, Georgeff MP. Modeling Rational Agents within a BDI-Architecture. International Conference on Principles of Knowledge Representation and Reasoning, 1991, p. 473-484.

8. Rao S, Georgeff MP. BDI-agents: From Theory to Practice. International Conference on Multiagent Systems (ICMAS'95), San Francisco, 1995.

9. Bratman ME. Intention, Plans, and Practical Reason. CSLI Publications, 1987/99.

10. Wooldridge M. Reasoning About Rational Agents. Cambridge, MA: MIT Press, 2000.

11. Cohen PR, Levesque HJ. Confirmation and Joint Action, IJCAI, 1991.

12. Cohen PR, Levesque HJ. Teamwork, Nous 25 (4), 1991, p. 487-512.

13. Levesque HJ, Cohen PR, Nunes J. On acting together. Proceedings of the National Conference on Artificial Intelligence, 1990.

14. Grosz B. Collaborating systems. Artificial Intelligence Magazine 17 (2), 1996, p. 67-85.

15. Grosz B, Kraus S. Collaborative plans for complex group actions. Artificial Intelligence 86, 1996, p. 269-368.

16. Vikhorev KS, Alechina N, Logan B. The ARTS Real-Time Agent Architecture. Second Workshop on Languages, Methodologies and Development Tools for Multi-agent Systems (LADS2009). CEUR Workshop Proceedings, Vol. 494, Turin, Italy, 2009.

17. Tambe M. Towards flexible teamwork, Journal of Artificial Intelligence Research 7, 1997, p. 83-124.

18. Pynadath DV, Tambe M, Chauvat N, Cavedon L. Toward team-oriented programming. In: Jennings NR, Lesperance Y, editors. Intelligent Agents VI: Agent Theories, Architectures and Languages, Berlin: Springer-Verlag, 1999, p. 233-247.

19. Yen J, Yin J, Ioerger TR, Miller MS, Xu D, Volz R. CAST: Collaborative Agents for Simulating Teamwork. IJCAI, 2001, p. 1135-1142.

20. Pynadath DV, Tambe M. The communicative multiagent team decision problem. Journal of Artificial Intelligence Research 16, 2002, p. 389-423.

21. Katz JJ, Fodor JA. The Structure of a Semantic Theory, Language 39, 1963.

22. Nirenburg S, Raskin V. Ontological Semantics. Cambridge, MA: MIT Press, 2004.

23. Raskin V, Hempelmann CF, Taylor JM. Guessing vs. knowing: The two approaches to semantics in natural language processing. Annual International Artificial Intelligence Conference Dialogue 2010, Moscow, Russia, 2010, p. 645-652.

24. Taylor JM, Hempelmann CF, Raskin V. On an automatic acquisition toolbox for ontologies and lexicons in ontological semantics. International Conference on Artificial Intelligence, Las Vegas, NV, 2010, p. 863-869.

25. Taylor JM, Raskin V, Hempelmann CF. From disambiguation failures to common-sense knowledge acquisition: A day in the life of an Ontological Semantic System. Web Intelligence Conference, Lyon, France, 2011.

26. Raskin V. Semantic Mechanisms of Humor, Dordrecht: D. Reidel, 1985.

27. Raskin V. On Algorithmic Discovery and Computational Implementation of the Opposing Scripts Forming a Joke. HCI 2015: International Conference on Human Computer Interaction, Los Angeles, CA, 2015.

28. Raskin V. Algorithmization of Script Detection in Verbal Jokes. HCI 2016, 2016.

29. Raskin V, Nirenburg S, Nirenburg I, Hempelmann CF, Triezenberg KE. The genesis of a script for bankruptcy in ontological semantics. In: Hirst G, Nirenburg S, editors. Proceedings of the Text Meaning Workshop, HLT/NAACL 2003: Human Language Technology and North American Chapter of the Association for Computational Linguistics Conference. ACL: Edmonton, Alberta, Canada, 2003.