An important requirement for autonomous systems is the ability to detect and recover from exceptional situations such as failures in observations. In this paper we demonstrate how techniques for planning with sensing under uncertainty can play a major role in solving the problem of recovering from such situations. As a first step, we concentrate on failures in perceptual anchoring, that is, in how to connect a symbol representing an object to the percepts of that object. We provide a classification of failures and present planning-based methods for recovering from them. We illustrate our approach by showing tests run on a mobile robot equipped with a color camera.
An autonomous robot using symbolic reasoning, sensing and acting in a real environment needs the ability to create and maintain the connection between symbols representing objects in the world and the corresponding perceptual representations given by its sensors. This connection has been named perceptual anchoring. In complex environments, anchoring is not always easy to establish: the situation may often be ambiguous as to which percept actually corresponds to a given symbol. In this paper, we extend perceptual anchoring to deal robustly with ambiguous situations by providing general methods for detecting them and recovering from them. We consider different kinds of ambiguous situations and present planning-based methods to recover from them. We illustrate our approach by showing experiments involving a mobile robot equipped with a color camera and an electronic nose.
We present a new approach for odour detection and recognition based on a so-called PEIS-Ecology: a network of gas sensors and a mobile robot are integrated in an intelligent environment. The environment can provide information regarding the location of potential odour sources, which is then relayed to a mobile robot equipped with an electronic nose. The robot can then perform a more thorough analysis of the odour character. This novel approach alleviates some of the challenges faced by mobile olfaction techniques that rely on a single mobile robot. The environment also provides contextual information which can be used to constrain the learning of odours, which is shown to improve classification performance.
Perceptual anchoring is the problem of creating and maintaining in time the connection between symbols and sensor data that refer to the same physical objects. This is one of the facets of the general problem of integrating symbolic and non-symbolic processes in an intelligent system. Gärdenfors' conceptual spaces provide a geometric treatment of knowledge which bridges the gap between the symbolic and sub-symbolic approaches. As such, they can be used for the study of the anchoring problem. In this paper, we propose a computational framework for anchoring based on conceptual spaces. Our framework exploits the geometric structure of conceptual spaces for many of the crucial tasks of anchoring, like matching percepts to symbolic descriptions or tracking the evolution of objects over time.
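To make the geometric matching concrete, here is a minimal Python sketch of how a percept could be matched to symbolic descriptions in a conceptual space. The quality dimensions, prototype points, salience weights and the exponential similarity form are illustrative assumptions, not the framework proposed in the paper.

```python
import numpy as np

# Illustrative conceptual space: each concept is a prototype point over
# normalized quality dimensions (here: hue, size), and similarity to a
# percept decays exponentially with weighted Euclidean distance.
PROTOTYPES = {
    "red-ball":  np.array([0.95, 0.30]),
    "blue-ball": np.array([0.60, 0.30]),
}
WEIGHTS = np.array([2.0, 1.0])  # salience weight per quality dimension

def similarity(percept, prototype, weights, c=1.0):
    """Shepard-style similarity: exp(-c * weighted distance)."""
    d = np.sqrt(np.sum(weights * (percept - prototype) ** 2))
    return np.exp(-c * d)

def best_match(percept):
    """Match a percept to the symbol with the most similar prototype."""
    scores = {s: similarity(percept, p, WEIGHTS) for s, p in PROTOTYPES.items()}
    return max(scores, key=scores.get), scores

symbol, scores = best_match(np.array([0.90, 0.28]))
print(symbol)  # 'red-ball', the nearer prototype
```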
Early detection and adaptive support to changing individual needs related to ageing is an important challenge in today’s society. In this paper we present a system called GiraffPlus that aims to address this challenge and is being developed in an ongoing European project. The system consists of a network of home sensors that can be automatically configured to collect data for a range of monitoring services; a semi-autonomous telepresence robot; a sophisticated context recognition system that can give high-level and long-term interpretations of the collected data and respond to certain events; and personalized services delivered through adaptive user interfaces for primary users. The system performs a range of services including data collection and analysis of long-term trends in behaviors and physiological parameters (e.g. relating to sleep or daily activity); warnings, alarms and reminders; and social interaction through the telepresence robot. The latter is based on the Giraff telepresence robot, which is already in place in a number of homes. Particular emphasis is put on user evaluation outside the laboratories. A distinctive aspect of the project is that the GiraffPlus system will be installed and evaluated in at least 15 homes of elderly people. The concept of “useworthiness” is central in order to ensure that the GiraffPlus system provides services that are easy to use and worth using. In addition, by using existing and affordable components we strive to achieve a system that is affordable and close to commercialization.
An intelligent physical agent must incorporate motor and perceptual processes to interface with the physical world, and abstract cognitive processes to reason about the world and the options available. One crucial aspect of incorporating cognitive processes into a physically embedded reasoning system is the integration between the symbols used by the reasoning processes to denote physical objects, and the perceptual data corresponding to these objects. We treat this integration aspect by proposing a fuzzy computational theory of anchoring. Anchoring is the process of creating and maintaining the correspondence between symbols and percepts that refer to the same physical objects. Modeling this process using fuzzy set-theoretic notions enables dealing with perceptual data that can be affected by uncertainty/imprecision and with imprecise/vague linguistic descriptions of objects.
Future robots will work in hospitals, elderly care centers, schools, and homes. Similarity with humans can facilitate interaction with a variety of users who don't have robotics expertise, so it makes sense to take inspiration from humans when developing robots. However, humanlike appearance can also be deceiving, convincing users that robots can understand and do much more than they actually can. Developing a humanlike appearance must go hand in hand with increasing robots' cognitive, social, and perceptive capabilities. This installment of Trends & Controversies explores different aspects of human-inspired robots.
This paper describes a methodology for performing longitudinal evaluations when a social robotic telepresence system is deployed in realistic environments. This work is the core of an Ambient Assisted Living project called ExCITE (Enabling Social Interaction Through Telepresence). The ExCITE project is geared towards an elderly audience and aims to increase social interaction among the elderly, their families and healthcare services by using robotic telepresence. The robotic system used in the project is called the Giraff robot; over a three-year period, prototypes of this platform are deployed at a number of test sites in different European countries, where user feedback is collected and fed back into the refinement of the prototype. In this paper, we discuss the methodology of ExCITE in relation to other methodologies for longitudinal evaluation. The paper also provides a discussion of the possible pitfalls and risks in performing longitudinal studies of this nature, particularly as they relate to social robotic telepresence technologies.
This paper gives an overview of the research papers published on symbol grounding in the period from the beginning of the 21st century up to 2012. The focus is on the use of symbol grounding for robotics and intelligent systems. The review covers a number of subtopics, including physical symbol grounding, social symbol grounding, symbol grounding for vision systems, anchoring in robotic systems, and learning of symbol grounding in software systems and robotics. This review is published in conjunction with a special issue on Symbol Grounding in the Künstliche Intelligenz journal.
Anchoring is the problem of connecting, inside an artificial system, symbols and sensor data that refer to the same physical objects in the external world. This problem needs to be solved in any robotic system that incorporates a symbolic component. However, it is only recently that the anchoring problem has started to be addressed as a problem per se, and a few general solutions have begun to appear in the literature. This paper introduces the special issue on perceptual anchoring of the Robotics and Autonomous Systems journal. Our goal is to provide a general overview of the anchoring problem, and to highlight some of its subtle points.
Anchoring is the process of creating and maintaining the correspondence between symbols and percepts that refer to the same physical objects. Although this process must necessarily be present in any physically embedded system that includes a symbolic component (e.g., an autonomous robot), no systematic study of anchoring as a problem per se has been reported in the literature on intelligent systems. In this paper, we propose a domain-independent definition of the anchoring problem, and identify its three basic functionalities: find, reacquire, and track. We illustrate our definition on two systems operating in two different domains: an unmanned airborne vehicle for traffic surveillance; and a mobile robot for office navigation.
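The three functionalities lend themselves to a small interface sketch. The following Python skeleton is a hypothetical illustration of find, track and reacquire over a stream of attribute-valued percepts; the Anchor structure and the matching predicates are invented for exposition and do not reproduce the authors' definition.

```python
import time
from dataclasses import dataclass, field

@dataclass
class Anchor:
    """Internal structure linking a symbol to its current percept."""
    symbol: str          # e.g. "car-22"
    signature: dict      # latest perceptual attributes of the object
    last_seen: float = field(default_factory=time.time)

def find(symbol, description, percepts):
    """Create an anchor from a percept that satisfies a symbolic description."""
    for p in percepts:
        if all(p.get(k) == v for k, v in description.items()):
            return Anchor(symbol, p)
    return None  # no matching percept: anchoring fails or is deferred

def track(anchor, percepts, match):
    """Keep an anchor updated while the object remains in view."""
    for p in percepts:
        if match(anchor.signature, p):
            anchor.signature, anchor.last_seen = p, time.time()
            return True
    return False

def reacquire(anchor, percepts, match):
    """Re-anchor after the object has been out of view; here a plain retry,
    where a real system would also predict how the object may have moved."""
    return track(anchor, percepts, match)
```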
Intelligent agents embedded in physical environments need the ability to connect, or anchor, the symbols used to perform abstract reasoning to the physical entities which these symbols refer to. Anchoring must rely on perceptual data which is inherently affected by uncertainty. We propose an anchoring technique based on the use of fuzzy sets to represent uncertainty, and of degree of subsethood to compute the partial match between signatures of objects. We show examples where we use this technique to allow a deliberative system to reason about the objects (cars) observed by a vision system on board an unmanned helicopter, in the framework of the WITAS project.
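The partial-match computation can be illustrated with Kosko's degree-of-subsethood ratio over discrete fuzzy sets, which is one standard way to formalize it; the attribute domain and membership values below are made up for the example.

```python
def subsethood(a, b):
    """Kosko's degree of subsethood: how far fuzzy set a is contained in b."""
    num = sum(min(a[x], b.get(x, 0.0)) for x in a)
    den = sum(a.values())
    return num / den if den else 1.0

# Fuzzy signature of a symbolic description ("a dark red car") against an
# observed percept, over a coarse, made-up colour domain.
description = {"red": 1.0, "dark-red": 0.8}
percept = {"red": 0.7, "dark-red": 0.9, "brown": 0.2}

print(subsethood(description, percept))  # partial match degree, here ~0.83
```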
Anchoring is the process of creating and maintaining the correspondence between symbols and percepts that refer to the same physical objects. This process must necessarily be present in any physically embedded system that includes a symbolic component, for instance, in an autonomous robot that uses a planner to generate strategic decisions. However, no systematic study of anchoring as a problem per se has been reported in the literature on intelligent systems. In this paper, we advocate the need for a domain-independent framework to deal with the anchoring problem, and we report some initial steps in this direction. We illustrate our arguments and framework by showing experiments performed on a real mobile robot.
Anchoring is the process of creating and maintaining the correspondence between symbols and percepts that refer to the same physical objects. Although this process must necessarily be present in any symbolic reasoning system embedded in a physical environment (e.g., an autonomous robot), the systematic study of anchoring as a clearly separated problem is just in its initial phase. In this paper we focus on the use of symbols in actions and plans and the consequences this has for anchoring. In particular we introduce action properties and partial matching of object descriptions. We also consider the use of indefinite references in the context of action. The use of our formalism is exemplified in a mobile robotic domain.
Anchoring is the problem of how to connect, inside an artificial system, the symbol-level and signal-level representations of the same physical object. In most previous work on anchoring, symbol-level representations were meant to denote one specific object, like 'the red pen p22'. These are also called definite descriptions. In this paper, we study anchoring in the case of indefinite descriptions, like 'a red pen x'. A key point of our study is that anchoring with an indefinite description involves, in general, the selection of one object among several perceived objects that satisfy that description. We analyze several strategies to perform object selection, and compare them with the problem of action selection in autonomous embedded agents.
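As a rough illustration of what such selection strategies might look like, the Python fragment below contrasts committing to an arbitrary match, committing to the best match, and deferring when the choice is ambiguous; the scoring function and the margin threshold are placeholder assumptions, not the strategies analyzed in the paper.

```python
import random

def select_any(candidates):
    """Commit to an arbitrary percept satisfying the description."""
    return random.choice(candidates)

def select_best(candidates, score):
    """Commit to the percept that satisfies the description best."""
    return max(candidates, key=score)

def select_cautious(candidates, score, margin=0.2):
    """Defer when the two best candidates are too close to call."""
    ranked = sorted(candidates, key=score, reverse=True)
    if len(ranked) > 1 and score(ranked[0]) - score(ranked[1]) < margin:
        return None  # ambiguous: trigger recovery, e.g. further sensing
    return ranked[0]
```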
A symbiotic robotic system consists of a robot, a human, and a (smart) environment that cooperate symbiotically to perform a task. While the possibility of cooperating with the smart environment and with humans might greatly simplify the execution of robotic tasks, it also presents new challenges that open new research directions. In this article, we claim that symbiotic robotic systems will constitute a dominant new paradigm in the future of autonomous robotics, and discuss the corresponding potential and scientific challenges. This article is part of a special issue on the Future of AI.
In settings where heterogeneous robotic systems interact with humans, information from the environment must be systematically captured, organized and maintained over time. In this work, we propose a model for connecting perceptual information to semantic information in a multi-agent setting. In particular, we present semantic cooperative perceptual anchoring, which captures collectively acquired perceptual information and connects it to semantically expressed commonsense knowledge. We describe how we implemented the proposed model in a smart environment, using different modern perceptual and knowledge representation techniques. We present the results of the system and investigate different scenarios in which we use commonsense together with perceptual knowledge for communication, reasoning and exchange of information.
Ambient environments which integrate a number of sensing devices and actuators intended for use by human users need to be able to express knowledge about objects, their functions and their properties to assist in the performance of everyday tasks. For this to occur, perceptual data must be grounded to symbolic information that in turn can be used in communication with the human. For symbolic information to be meaningful it should be part of a rich knowledge base that includes an ontology of concepts and common sense. In this work we present an integration between ResearchCyc and an anchoring framework that mediates the connection between the perceptual information in an intelligent home environment and the reasoning system. Through simple dialogues we validate how objects placed in the home environment are grounded by a network of sensors and made available to a larger knowledge base where reasoning is exploited. This first integration work is a step towards integrating the richness of a KRR system developed over many years in isolation with a physically embedded intelligent system.
In this paper we describe an implemented framework that integrates knowledge representation and reasoning in a symbiotic system. In such systems a number of heterogeneous sensors pervasively embedded in the environment, mobile robots and humans co-exist and communicate. In this work, the integration is mediated through perceptual anchoring, which creates and maintains the correspondences between the symbol system and the perceptual data that refer to the same physical object. The overall framework is evaluated using ResearchCyc as the knowledge representation and reasoning system, within the context of a physical testbed, which consists of a small apartment-like home.
We present a model for anchoring categorical conceptual information which originates from physical perception and the web. The model is an extension of the anchoring framework which is used to create and maintain over time semantically grounded sensor information. Using the augmented anchoring framework that employs complex symbolic knowledge from a commonsense knowledge base, we attempt to ground and integrate symbolic and perceptual data that are available on the web. We introduce conceptual anchors which are representations of general, concrete conceptual terms. We show in an example scenario how conceptual anchors can be coherently integrated with perceptual anchors and commonsense information for the acquisition of novel concepts.
The success of mobile robots, and particularly of those interfacing with humans in daily environments (e.g., assistant robots), relies on the ability to manipulate information beyond simple spatial relations. We are interested in semantic information, which gives meaning to spatial information like images or geometric maps. We present a multi-hierarchical approach to enable a mobile robot to acquire semantic information from its sensors, and to use it for navigation tasks. In our approach, the link between spatial and semantic information is established via anchoring. We show experiments on a real mobile robot that demonstrate its ability to use and infer new semantic information from its environment, improving its operation.
An autonomous robot using symbolic reasoning, sensing and acting in a real environment needs the ability to create and maintain the connection between symbols representing objects in the world and the corresponding perceptual representations given by its sensors. This connection has been named perceptual anchoring. In complex environments, anchoring is not always easy to establish: the situation may often be ambiguous as to which percept actually corresponds to a given symbol.
In this paper, we extend perceptual anchoring to deal robustly with ambiguous situations by providing general methods for detecting them and recovering from them. We consider different kinds of ambiguous situations. We also present methods to recover from these situations based on automatically formulating them as conditional planning problems that are then solved by a planner.
We illustrate our approach by showing experiments involving a mobile robot equipped with a color camera and an electronic nose.
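To give a flavour of the recovery formulation, here is a toy contingent plan in Python: an ambiguity between two candidate anchors is resolved by a sensing action whose observed outcome selects the branch to execute. The action names and plan encoding are invented for illustration and are not the paper's planner representation.

```python
# A contingent plan is encoded as (sensing_action, {outcome: branch});
# leaves are ordinary actions. Both actions and outcomes are invented.
plan = ("smell", {
    "garbage-odour": [("anchor", "can-1"), ("pickup", "can-1")],
    "no-odour":      [("move-to", "can-2"), ("anchor", "can-2")],
})

def execute(node, sense, act):
    """Walk the plan tree, branching on observed sensing outcomes."""
    if isinstance(node, tuple) and isinstance(node[1], dict):
        action, branches = node
        outcome = sense(action)       # run the sensing action, observe result
        for step in branches[outcome]:
            execute(step, sense, act)
    else:
        act(node)                     # execute an ordinary action

# Stubbed execution: the e-nose reports a garbage odour at the first can.
execute(plan, sense=lambda action: "garbage-odour", act=print)
```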
Mobile robotic telepresence systems used for social interaction scenarios require that users steer robots in a remote environment. As a consequence, a heavy workload can be put on users if they are unfamiliar with using robotic telepresence units. One way to lessen this workload is to automate certain operations performed during a telepresence session in order to assist remote drivers in navigating the robot in new environments. Such operations include autonomous robot localization and navigation to certain points in the home and automatic docking of the robot to the charging station. In this paper we describe the implementation of such autonomous features along with a user evaluation study. The evaluation scenario focuses on novice users' first experience of using the system. Importantly, the scenario taken in this study assumed that participants had as little prior information about the system as possible. Four different use cases were identified from the user behaviour analysis.
The field of mobile robotic telepresence for social communication is in rapid expansion and it is of interest to understand what promotes good interaction. In this paper, we present the results of an experiment where novice users were given a guided tour while maneuvering a mobile robotic telepresence system for the first time. In a previous study, it was found that subjective presence questionnaires and observations of spatial configurations based on Kendon’s F-formations were useful to evaluate quality of interaction in mobile robotic telepresence. In an effort to find more automated methods to assess the quality of interaction, the study in this paper used the same measures with the addition of objective sociometric measures. Experimental results show that the quantitative analysis of the sociometric data correlates with a number of parameters gathered via qualitative analysis, e.g. different dimensions of presence and observed problems in maneuvering the robot. The implications of this form a basis upon which a methodology for measuring interaction quality can be obtained.
Mobile Robotic Telepresence (MRP) systems incorporate video conferencing equipment onto mobile robot devices which can be steered from remote locations. These systems, which are primarily used in the context of promoting social interaction between people, are becoming increasingly popular within certain application domains such as health care environments, independent living for the elderly and office environments. In this review, an overview of the various systems, application areas and challenges found in the literature concerning mobile robotic telepresence is provided. The survey also proposes a set of terms for the field, as there is currently a lack of standard terminology for the different concepts related to MRP systems. Further, this review provides an outlook on the various research directions for developing and enhancing mobile robotic telepresence systems per se, as well as for evaluating the interaction in laboratory and field settings. Finally, the survey outlines a number of design implications for the future of mobile robotic telepresence systems for social interaction.
In this paper we present data collected at a training session where healthcare personnel and alarm operators steered a mobile robotic telepresence system for the first time. The purpose of the system is to serve as a communicative tool, particularly when interacting with an elderly audience. The results are based on questionnaires which included questions about experienced social and spatial presence from the Temple Presence Inventory as well as the Networked Minds Social Presence Inventory. Also investigated in this study is how intuitive the system was to use, as well as how attentive the users were to what was going on in the environment. Over thirty healthcare personnel and alarm operators participated in the study, and the overall results presented in the paper suggest that the two questionnaires are indeed suitable for use in the social robotic telepresence domain for providing indications on both social and spatial presence.
In this paper we focus on spatial formations when interacting via mobile robotic telepresence (MRP) systems. Previous research has found that those who used an MRP system to make a remote visit (pilot users) tended to use different spatial formations from what is typical in human-human interaction. In this paper, we present the results of a study where a pilot user interacted with ten elderly people via an MRP system. Intentional deviations from known accepted spatial formations were made in order to study their effect on interaction quality from the local user perspective. Using a retrospective interview technique, the elderly commented on the interaction and confirmed the importance of adhering to acceptable spatial configurations. The results show that there is a mismatch between pilot behavior and local user preference and that it is important to evaluate an MRP system from two perspectives, the pilot user’s and the local user’s.
This article presents the results from a video-based evaluation study of a social robotic telepresence solution for the elderly. The evaluated system is a mobile teleoperated robot called Giraff that allows caregivers to virtually enter a home and conduct a natural visit just as if they were physically there. The evaluation focuses on the perspectives of primary healthcare organizations and collects feedback from different categories of health professionals. The evaluation included 150 participants and yielded unexpected results with respect to the acceptance of the Giraff system. In particular, greater exposure to technology did not necessarily increase acceptance, and large variances occurred between the categories of health professionals. In addition to outlining the results, this study provides a number of indications with respect to increasing the acceptance of technology for the elderly.
In this paper, we focus on spatial formations when interacting via mobile robotic telepresence (MRP) systems. Previous research has found that those who used an MRP system to make a remote visit (pilot users) tended to use different spatial formations from what is typical in human-human interaction. In this paper, we present the results of a study where a pilot user interacted with ten elderly people via an MRP system. Intentional deviations from known accepted spatial formations were made in order to study their effect on interaction quality from the local user perspective. Using a retrospective interview technique, the elderly commented on the interaction and confirmed the importance of adhering to acceptable spatial configurations. The results show that there is a mismatch between pilot user behaviour and local user preference and that it is important to evaluate an MRP system from two perspectives, the pilot user’s and the local user’s.
The field of mobile robotic telepresence for social communication is in rapid expansion and it is of interest to understand what promotes good interaction. In this paper, we present the results of an experiment where novice users working in health care were given a guided tour while maneuvering a mobile robotic telepresence system for the first time. In a previous study, it was found that subjective presence questionnaires and observations of spatial configurations based on Kendon’s F-formations were useful to evaluate quality of interaction in mobile robotic telepresence. In an effort to find more automated methods to assess the quality of interaction, the study in this paper used the same measures, with the addition of objective sociometric measures. Experimental results show that the quantitative analysis of the sociometric data correlates with a number of parameters gathered via qualitative analysis, e.g. different dimensions of presence and observed problems in maneuvering the robot.
Robotic telepresence offers a means to connect to a remote location via traditional telepresence with the added value of moving and actuating in that location. Recently, there has been a growing focus on the use of robotic telepresence to enhance social interaction among the elderly. However, for such technology to be accepted, the presence experienced when using such a system is likely to be important. In this paper, we present results obtained from a training session where healthcare personnel used a robotic telepresence system for the first time. The study was quantitative and based on two standard presence questionnaires, namely the Temple Presence Inventory (TPI) and the Networked Minds Social Presence Inventory. The study showed that, overall, the sense of social richness as perceived by the users was high. The users also had a realistic feeling regarding their spatial presence.
The goal of our studies is to iteratively refine prototypes of the robot by involving end users in development cycles of the prototype throughout the project. The evaluations will be conducted with the aim of maximizing usability across geographic, demographic and cultural boundaries, as well as diverse home environments and user preferences and attitudes. The project ExCITE focuses on end users’ perspectives when using a robotic telepresence platform, the Giraff. The Giraff system consists of a tiltable screen and web camera mounted on a moveable robotic base that can be teleoperated. Our application area is elder care. We motivate the use of telepresence in elder care as a way to ensure safety, facilitate independent living and enhance social interaction.
Olfaction is a challenging new sensing modality for intelligent systems. With the emergence of electronic noses, it is now possible to detect and recognize a range of different odours for a variety of applications. In this work, we introduce a new application where electronic olfaction is used in cooperation with other types of sensors on a mobile robot in order to acquire the odour property of objects. We examine the problem of deciding when, how and where the electronic nose (e-nose) should be activated by planning for active perception, and we consider the problem of integrating the information provided by the e-nose with both prior information and information from other sensors (e.g., vision). Experiments performed on a mobile robot equipped with an e-nose are presented.
An electronic nose is an intelligent sensing device that uses a gas sensor array of partial and overlapping selectivity along with a pattern-recognition component to distinguish between simple and complex odors. To date, researchers have used e-noses in many applications and domains, from the food industry to medical diagnosis. A next stage in e-nose development is to introduce artificial olfaction into integrated systems, so e-noses can work with other sensors on more complex platforms, such as on mobile robots or in intelligent environments. Here, the authors offer an overview of the more critical challenges in integrating this important sense into intelligent systems. They also discuss Pippi, a mobile robot that uses different sensing modalities (visual, sonar, tactile, and e-nose sensors) along with high-level processes (planners and symbolic reasoning) to accomplish several olfactory-related tasks.
In this paper, we explore the integration of an electronic nose and its odour discrimination functionalities into a multi-sensing robotic system which works over an extended period of time. The robot patrols an office environment, collecting odour samples of objects and performing user-requested tasks. By considering an experimental platform which operates over an extended period of time, a number of issues related to odour discrimination arise, such as drift in the sensor data, online learning of new odours, and the correct association of odour properties to objects. In addition to an electronic nose, our robotic system consists of other sensing modalities (vision and sonar), behaviour-based control and a high-level symbolic planner.
In this work we introduce symbolic knowledge representation and reasoning capabilities to enrich perceptual anchoring. The idea behind perceptual anchoring is the creation and maintenance of a connection between the symbolic and perceptual descriptions that refer to the same object in the environment. In this work we further extend the symbolic layer by combining a knowledge representation and reasoning (KRR) system with the anchoring module to exploit its knowledge inference mechanisms. We implemented a prototype of this novel approach to explore through initial experimentation the advantages of integrating a symbolic knowledge system into the anchoring framework in the context of an intelligent home. Our results show that using the KRR system we are better able to cope with ambiguities in the anchoring module through the exploitation of human-robot interaction.
Mobile olfactory robots can be used in a number of relevant application areas where a better understanding of a gas distribution is needed, such as environmental monitoring and safety- and security-related fields. In this paper we present a method to integrate the classification of odours together with gas distribution mapping. The resulting odour map is then correlated with the spatial information collected from a laser range scanner to form a combined map. Experiments are performed using a mobile robot in large and unmodified indoor and outdoor environments. Multiple odour sources are used and are identified using only transient information from the gas sensor response. The resulting multi-level map can be used as an intuitive representation of the collected odour data for a human user.
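A common way to build such an odour map is kernel extrapolation of point gas readings onto a grid; the sketch below follows that general idea (a Gaussian kernel with weight-normalized averaging) rather than the exact method of the paper, and the grid size, cell size and kernel width are arbitrary example values.

```python
import numpy as np

def gas_map(readings, size=(50, 50), cell=0.1, sigma=0.3):
    """Weight-normalized mean concentration per grid cell.
    readings: list of (x, y, concentration) samples, coordinates in metres."""
    num, den = np.zeros(size), np.zeros(size)
    ys, xs = np.mgrid[0:size[0], 0:size[1]]
    cx, cy = xs * cell, ys * cell                 # cell-centre coordinates
    for x, y, c in readings:
        w = np.exp(-((cx - x) ** 2 + (cy - y) ** 2) / (2 * sigma ** 2))
        num += w * c                              # kernel-weighted reading
        den += w
    return np.divide(num, den, out=np.zeros(size), where=den > 1e-9)

m = gas_map([(1.0, 1.0, 0.8), (3.5, 2.0, 0.3)])
print(m.shape, round(float(m.max()), 2))
```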
This paper provides a review of the most recent work on electronic noses used in the food industry. Focus is placed on applications within food quality monitoring, that is, meat, milk, fish, tea, coffee and wine. This paper demonstrates that there is a strong commonality between the different application areas in terms of the sensors used and the data processing algorithms applied. Further, this paper provides a critical outlook on the developments needed in this field for transitioning from research platforms to industrial instruments applied in real contexts.
More and more work in the field of artificial olfaction considers the integration of olfaction onto robotic systems. An important part of this integration is providing the robot with the ability to discriminate between different odour substances. This work presents the integration of an electronic nose onto a complete robotic system with multi-sensing modalities. The ambition of this work is to illustrate how the classification performance of odours can be improved through the exploitation of the mobility of a robot, as well as through the cooperation between a human user and an electronic perceptual system. These points are highlighted in the context of an online robotic system designed to perform different tasks which require the ability to discriminate between odour characters. The robotic system and experimental results are presented where these tasks are tested and evaluated.
This paper addresses the problem of enabling autonomous agents (e.g., robots) to carry out human-oriented tasks using an electronic nose. The nose consists of a combination of passive gas sensors with different selectivity, the outputs of which are fused together with an artificial neural network in order to recognize various human-determined odors. The basic idea is to ground human-provided linguistic descriptions of these odors in the actual sensory perceptions of the nose through a process of supervised learning. Analogous to the human nose, the paper explains a method by which an electronic nose can be used for substance identification. First, the receptors of the nose are exposed to a substance by means of inhalation with an electric pump. Then a chemical reaction takes place in the gas sensors over a period of time and an artificial neural network processes the resulting sensor patterns. This network was trained to recognize a basic set of pure substances such as vanilla, lavender and yogurt under controlled laboratory conditions. The complete system was then validated through a series of experiments on various combinations of the basic substances. First, we showed that the nose was able to consistently recognize unseen samples of the same substances on which it had been trained. In addition, we presented some first results where the nose was tested on novel combinations of substances on which it had not been trained by combining the learned descriptions - for example, it could distinguish lavender yogurt as a combination of lavender and yogurt.
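As a hedged sketch of the supervised-learning step described above, the following Python code trains a small feed-forward network to map gas-sensor response patterns to odour labels; the use of scikit-learn, the eight-sensor array, and the synthetic data are stand-in assumptions, not the paper's setup.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
labels = ["vanilla", "lavender", "yogurt"]
prototypes = rng.uniform(0.2, 0.9, size=(3, 8))   # 8 hypothetical gas sensors

# Simulate noisy sensor snapshots around each substance's response pattern.
X = np.vstack([p + rng.normal(0, 0.05, (40, 8)) for p in prototypes])
y = np.repeat(labels, 40)

net = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
net.fit(X, y)

# An unseen noisy sample of the first substance should still be recognized.
sample = (prototypes[0] + rng.normal(0, 0.05, 8)).reshape(1, -1)
print(net.predict(sample))  # expected: ['vanilla']
```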
Olfaction is a challenging new sensing modality for intelligent systems. With the emergence of electronic noses (e-noses) it is now possible to train a system to detect and recognise a range of different odours. In this work, we integrate the electronic nose on a multi-sensing mobile robotic platform. We plan for perceptual actions and examine when, how and where the e-nose should be activated.
Finally, experiments are performed on a mobile robot equipped with an e-nose together with a variety of sensors and used for object detection.