Reliable autonomous navigation is still a challenging problem for robots with simple and inexpensive hardware. A key difficulty is the need to maintain an internal map of the environment and an accurate estimate of the robot’s position in this map. Recently, a stigmergic approach has been proposed in which a navigation map is stored in the environment, on a grid of RFID tags, and robots use it to optimally reach predefined goal points without the need for internal maps. While effective, this approach is limited to a predefined set of goal points. In this paper, we extend this approach to enable robots to travel to any point on the RFID floor, even if it was not previously identified as a goal location, as well as to keep a safe distance from any given critical location. Our approach produces safe, repeatable and quasi-optimal trajectories without the use of internal maps, self-localization, or path planning. We report experiments run in a real apartment equipped with an RFID floor, in which a service robot either reaches or avoids a user who wears slippers equipped with an RFID tag reader.
A team of mobile robots moving in a shared area raises the problem of safe and autonomous navigation. While avoiding static and dynamic obstacles, mobile robots in a team can produce complicated and irregular movements. Local reactive approaches are used in situations where robots move in a dynamic environment; these approaches enable safe navigation but do not give optimal solutions. In this work a 2-D navigation strategy is implemented, in which a potential field method is used for obstacle avoidance. This potential field method is improved using fuzzy rules, traffic rules and market based optimization (MBO). Fuzzy rules are used to deform repulsive potential fields in the vicinity of obstacles. Traffic rules are used to deal with situations where two robots cross each other. Market based optimization (MBO) is used to strengthen or weaken the repulsive potential fields generated by other robots based on their importance. To verify this strategy on more realistic vehicles, it is implemented and tested in simulation. Issues encountered while implementing this method and the limitations of this navigation strategy are also discussed. Extensive experiments are performed to examine the validity of the MBO navigation strategy over the traditional potential field (PF) method.
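The combination of an attractive pull toward the goal with importance-weighted repulsive pushes, as an MBO-style gain per obstacle would provide, can be sketched as follows. This is a minimal illustration under our own assumptions: function and parameter names are ours, and the standard quadratic/inverse potentials stand in for the deformed fields described in the abstract.

```python
import numpy as np

def potential_gradient(pos, goal, obstacles, k_att=1.0, k_rep=1.0, d0=1.0, gains=None):
    """Gradient of a 2-D potential field: attractive term toward the goal
    plus repulsive terms from obstacles within influence radius d0.
    A per-obstacle gain (as an MBO-style importance weight would assign)
    strengthens or weakens each repulsive contribution."""
    pos, goal = np.asarray(pos, float), np.asarray(goal, float)
    grad = k_att * (pos - goal)            # attractive term (descent pulls toward goal)
    gains = gains or [1.0] * len(obstacles)
    for obs, g in zip(obstacles, gains):
        diff = pos - np.asarray(obs, float)
        d = np.linalg.norm(diff)
        if 0.0 < d < d0:                   # repulsion acts only inside the influence radius
            grad += g * k_rep * (1.0 / d0 - 1.0 / d) / d**3 * diff
    return grad                            # the robot moves along -grad
```

The robot steps along the negative gradient; scaling `gains` up for important robots and down for unimportant ones reproduces the strengthening/weakening mechanism in a single knob.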
This paper presents a method for localisation in hybrid metric-topological maps built using only local information, that is, only measurements that were captured by the robot when it was in a nearby location. The motivation is that observations are typically range and viewpoint dependent and that a discrete map representation might not be able to explain the full structure within a voxel. The localisation system selects a submap based on how frequently, and from where, each submap was updated. This allows the system to select the most descriptive submap, thereby improving the localisation and increasing performance by up to 40%.
This article presents an accurate, highly efficient, and learning-free method for large-scale odometry estimation using spinning radar, empirically found to generalize well across very diverse environments—outdoors, from urban to woodland, and indoors in warehouses and mines—without changing parameters. Our method integrates motion compensation within a sweep with one-to-many scan registration that minimizes distances between nearby oriented surface points and mitigates outliers with a robust loss function. Extending our previous approach, conservative filtering for efficient and accurate radar odometry (CFEAR), we present an in-depth investigation on a wider range of datasets, quantifying the importance of filtering, resolution, registration cost and loss functions, keyframe history, and motion compensation. We present a new solving strategy and configuration that overcomes previous issues with sparsity and bias, and improves on our previous state of the art by 38%, thus, surprisingly, outperforming radar simultaneous localization and mapping (SLAM) and approaching lidar SLAM. The most accurate configuration achieves 1.09% error at 5 Hz on the Oxford benchmark, and the fastest achieves 1.79% error at 160 Hz.
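The idea of registering points against nearby oriented surface points under a robust loss can be sketched in a few lines. This is a generic illustration, not the CFEAR implementation: the Cauchy loss, the scale parameter, and all names are our assumptions.

```python
import numpy as np

def cauchy_loss(r, c=0.1):
    """Robust loss: grows logarithmically in the residual, so large
    residuals (outliers) contribute far less than under a squared loss."""
    return (c**2 / 2.0) * np.log1p((r / c) ** 2)

def point_to_surface_cost(points, surf_points, surf_normals, c=0.1):
    """Sum of robustly weighted point-to-oriented-surface distances.
    Each query point is paired with a nearby surface point and its
    normal; the residual is the projection onto that normal."""
    residuals = np.einsum('ij,ij->i', points - surf_points, surf_normals)
    return float(np.sum(cauchy_loss(np.abs(residuals), c)))
```

An optimizer would minimize this cost over the sensor pose applied to `points`; the robust loss is what keeps spurious radar returns from dominating the alignment.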
Decentralization, immutability and transparency make Blockchain one of the most innovative technologies of recent years. This paper presents an overview of solutions based on Blockchain technology for multi-agent robotic systems, and provides an analysis and classification of this emerging field. The reasons for implementing Blockchain in a multi-robot network include increasing the efficiency of interaction between agents by providing more trusted information exchange, reaching consensus in trustless conditions, assessing robot productivity, detecting performance problems, identifying intruders, allocating plans and tasks, and deploying distributed solutions and joint missions. Blockchain-based applications are discussed to demonstrate how distributed ledgers can be used to extend the number of research platforms and libraries for multi-agent robotic systems.
Internet of Things (IoT) and robotics cannot be considered two separate domains these days. The Internet of Robotic Things (IoRT) is a concept that has recently been introduced to describe the integration of robotics technologies in IoT scenarios. As a consequence, these two research fields have started interacting, thus linking their research communities. In this paper we intend to take further steps in joining the two communities and to broaden the discussion on the development of this interdisciplinary field. The paper provides an overview and analysis of possible solutions for the Internet of Robotic Things, together with their open challenges, discussing the issues of the IoRT architecture and the integration of smart spaces and robotic applications.
This research aims to develop an autonomous social robot for elderly individuals. The robot will learn from the interaction and change its behaviors in order to enhance the interaction and improve the user experience. For this purpose, we aim to use Deep Reinforcement Learning. The robot will observe the user’s verbal and nonverbal social cues using its camera and microphone; the reward will be the positive valence and engagement of the user.
Safety in human-robot interaction can be divided into physical safety and perceived safety, where the latter is still under-addressed in the literature. Investigating perceived safety in human-robot interaction requires a multidisciplinary perspective. Indeed, perceived safety is often considered to be associated with several common factors studied in other disciplines, i.e., comfort, predictability, sense of control, and trust. In this paper, we investigated the relationship between these factors and perceived safety in human-robot interaction using subjective and objective measures. We conducted a two-by-five mixed-subjects design experiment. There were two between-subjects conditions: the faulty robot was experienced either at the beginning or at the end of the interaction. The five within-subjects conditions correspond to (1) a baseline, and manipulations of robot behaviors to stimulate (2) discomfort, (3) decreased perceived safety, (4) decreased sense of control, and (5) distrust. The idea of triggering a deprivation of these factors was motivated by the definitions of safety in the literature, where safety is often defined by its absence. Twenty-seven young adult participants took part in the experiments. Participants were asked to answer questionnaires that measure the manipulated factors after the within-subjects conditions. Besides questionnaire data, we collected objective measures such as videos and physiological data. The questionnaire results show a correlation between comfort, sense of control, trust, and perceived safety. Since these factors are the main factors that influence perceived safety, they should be considered in human-robot interaction design decisions. We also discuss the effect of individual human characteristics (such as personality and gender), which could be predictors of perceived safety.
We used physiological signal data and facial affect extracted from videos to estimate perceived safety, with participants’ subjective ratings serving as labels. The objective measures revealed that the prediction rate was higher for the physiological signal data. This paper can play an important role in the goal of better understanding perceived safety in human-robot interaction.
This article surveys reinforcement learning approaches in social robotics. Reinforcement learning is a framework for decision-making problems in which an agent interacts through trial-and-error with its environment to discover an optimal behavior. Since interaction is a key component in both reinforcement learning and social robotics, it can be a well-suited approach for real-world interactions with physically embodied social robots. The scope of the paper is focused particularly on studies that include social physical robots and real-world human-robot interactions with users. We present a thorough analysis of reinforcement learning approaches in social robotics. In addition to a survey, we categorize existing reinforcement learning approaches based on the used method and the design of the reward mechanisms. Moreover, since communication capability is a prominent feature of social robots, we discuss and group the papers based on the communication medium used for reward formulation. Considering the importance of designing the reward function, we also provide a categorization of the papers based on the nature of the reward. This categorization includes three major themes: interactive reinforcement learning, intrinsically motivated methods, and task performance-driven methods. The paper also covers the benefits and challenges of reinforcement learning in social robotics; the evaluation methods of the surveyed papers, regarding whether they use subjective or algorithmic measures; a discussion in view of real-world reinforcement learning challenges and proposed solutions; and the points that remain to be explored, including the approaches that have thus far received less attention. Thus, this paper aims to be a starting point for researchers interested in using and applying reinforcement learning methods in this particular research field.
Robotic ecologies are networks of heterogeneous robotic devices pervasively embedded in everyday environments, where they cooperate to perform complex tasks. While their potential makes them increasingly popular, one fundamental problem is how to make them self-adaptive, so as to reduce the amount of preparation, pre-programming and human supervision that they require in real-world applications. The EU FP7 project RUBICON develops self-sustaining learning solutions yielding cheaper, adaptive and efficient coordination of robotic ecologies. The approach we pursue builds upon a unique combination of methods from cognitive robotics, agent control systems, wireless sensor networks and machine learning. This paper briefly illustrates how these techniques are being extended, integrated, and applied to ambient assisted living (AAL) applications.
Local scan registration approaches commonly only utilize ego-motion estimates (e.g. odometry) as an initial pose guess in an iterative alignment procedure. This paper describes a new method to incorporate ego-motion estimates, including uncertainty, into the objective function of a registration algorithm. The proposed approach is particularly suited for feature-poor and self-similar environments, which typically present challenges to current state-of-the-art registration algorithms. Experimental evaluation shows significant improvements in accuracy when using data acquired by Automatic Guided Vehicles (AGVs) in industrial production and warehouse environments.
In this article, we address the problem of realizing a complete efficient system for automated management of fleets of autonomous ground vehicles in industrial sites. We elicit from current industrial practice and the scientific state of the art the key challenges related to autonomous transport vehicles in industrial environments and relate them to enabling techniques in perception, task allocation, motion planning, coordination, collision prediction, and control. We propose a modular approach based on least commitment, which integrates all modules through a uniform constraint-based paradigm. We describe an instantiation of this system and present a summary of the results, showing evidence of increased flexibility at the control level to adapt to contingencies.
This paper presents a local planning approach targeted at pseudo-omnidirectional vehicles, that is, vehicles that can drive sideways and rotate on the spot. This local planner, MSDU, is based on optimal control and formulates a non-linear optimization problem that exploits the omni-motion capabilities of the vehicle to drive it to the goal in a smooth and efficient manner while avoiding obstacles and singularities. MSDU is designed for a real mobile manipulation platform where one key function is the capability to drive in narrow and confined areas. The real-world evaluations show that MSDU planned paths that were smoother and more accurate than those of a comparable local path planner, Timed Elastic Band (TEB), with a mean (translational, angular) error for MSDU of (0.0028 m, 0.0010 rad) compared to (0.0033 m, 0.0038 rad) for TEB. MSDU also generated paths that were consistently shorter than TEB’s, with a mean (translational, angular) distance traveled of (0.6026 m, 1.6130 rad) for MSDU compared to (0.7346 m, 3.7598 rad) for TEB.
Robot grasping depends on the specific manipulation scenario: the object, its properties, task and grasp constraints. Object-task affordances facilitate semantic reasoning about pre-grasp configurations with respect to the intended tasks, favoring good grasps. We employ probabilistic rule learning to recover such object-task affordances for task-dependent grasping from realistic video data.
While any grasp must satisfy the grasping stability criteria, good grasps depend on the specific manipulation scenario: the object, its properties and functionalities, as well as the task and grasp constraints. We propose a probabilistic logic approach for robot grasping, which improves grasping capabilities by leveraging semantic object parts. It provides the robot with semantic reasoning skills about the most likely object part to be grasped, given the task constraints and object properties, while also dealing with the uncertainty of visual perception and grasp planning. The probabilistic logic framework is task-dependent. It semantically reasons about pre-grasp configurations with respect to the intended task and employs object-task affordances and object/task ontologies to encode rules that generalize over similar object parts and object/task categories. The use of probabilistic logic for task-dependent grasping contrasts with current approaches that usually learn direct mappings from visual perceptions to task-dependent grasping points. The logic-based module receives data from a low-level module that extracts semantic object parts, and sends information to the low-level grasp planner. These three modules define our probabilistic logic framework, which is able to perform robotic grasping in realistic kitchen-related scenarios.
This paper presents the development, testing and validation of SWEEPER, a robot for harvesting sweet pepper fruit in greenhouses. The robotic system includes a six degrees of freedom industrial arm equipped with a specially designed end effector, RGB-D camera, high-end computer with graphics processing unit, programmable logic controllers, other electronic equipment, and a small container to store harvested fruit. All is mounted on a cart that autonomously drives on pipe rails and concrete floor in the end-user environment. The overall operation of the harvesting robot is described along with details of the algorithms for fruit detection and localization, grasp pose estimation, and motion control. The main contributions of this paper are the integrated system design and its validation and extensive field testing in a commercial greenhouse for different varieties and growing conditions. A total of 262 fruits were involved in a 4-week long testing period. The average cycle time to harvest a fruit was 24 s. Logistics took approximately 50% of this time (7.8 s for discharge of fruit and 4.7 s for platform movements). Laboratory experiments have proven that the cycle time can be reduced to 15 s by running the robot manipulator at a higher speed. The harvest success rates were 61% for the best fit crop conditions and 18% in current crop conditions. This reveals the importance of finding the best fit crop conditions and crop varieties for successful robotic harvesting. The SWEEPER robot is the first sweet pepper harvesting robot to demonstrate this kind of performance in a commercial greenhouse.
An accurate model of gas emissions is of high importance in several real-world applications related to monitoring and surveillance. Gas tomography is a non-intrusive optical method to estimate the spatial distribution of gas concentrations using remote sensors. The choice of sensing geometry, i.e., the arrangement of sensing positions from which to perform gas tomography, directly affects the reconstruction quality of the obtained gas distribution maps. In this paper, we present an investigation of criteria for determining suitable sensing geometries for gas tomography. We consider an actuated remote gas sensor installed on a mobile robot, and evaluate a large number of sensing configurations. Experiments in complex settings were conducted using a state-of-the-art CFD-based filament gas dispersal simulator. Our quantitative comparison yields preferred sensing geometries for sensor planning, which allow gas distributions to be reconstructed more accurately.
Creating an accurate model of gas emissions is an important task in monitoring and surveillance applications. A promising solution for a range of real-world applications are gas-sensitive mobile robots with spectroscopy-based remote sensors that are used to create a tomographic reconstruction of the gas distribution. The quality of these reconstructions depends crucially on the chosen sensing geometry. In this paper we address the problem of sensor planning by investigating sensing geometries that minimize reconstruction errors, and then formulate an optimization algorithm that chooses sensing configurations accordingly. The algorithm decouples sensor planning for single high concentration regions (hotspots) and subsequently fuses the individual solutions into a global solution consisting of sensing poses and the shortest path between them. The proposed algorithm compares favorably to a template matching technique in a simple simulation and in a real-world experiment. In the latter, we also compare the proposed sensor planning strategy to the sensing strategy of a human expert and find indications that the quality of the reconstructed map is higher with the proposed algorithm.
To study gas dispersion, several statistical gas distribution modelling approaches have been proposed recently. A crucial assumption in these approaches is that gas distribution models are learned from measurements that are generated by a time-invariant random process. While a time-independent random process can capture certain fluctuations in the gas distribution, more accurate models can be obtained by modelling changes in the random process over time. In this work we propose a time-scale parameter that relates the age of measurements to their validity for building the gas distribution model in a recency function. The parameters of the recency function define a time-scale and can be learned. The time-scale represents a compromise between two conflicting requirements for obtaining accurate gas distribution models: using as many measurements as possible and using only very recent measurements. We have studied several recency functions in a time-dependent extension of the Kernel DM+V algorithm (TD Kernel DM+V). Based on real-world experiments and simulations of gas dispersal (presented in this paper) we demonstrate that TD Kernel DM+V improves the obtained gas distribution models in dynamic situations. This represents an important step towards statistical modelling of evolving gas distributions.
The most common use of wireless sensor networks (WSNs) is to collect environmental data from a specific area, and to channel it to a central processing node for on-line or off-line analysis. The WSN technology, however, can be used for much more ambitious goals. We claim that merging the concepts and technology of WSN with the concepts and technology of distributed robotics and multi-agent systems can open new ways to design systems able to provide intelligent services in our homes and working places. We also claim that endowing these systems with learning capabilities can greatly increase their viability and acceptability, by simplifying design, customization and adaptation to changing user needs. To support these claims, we illustrate our architecture for an adaptive robotic ecology, named RUBICON, consisting of a network of sensors, effectors and mobile robots.
We present an approach to make planning adaptive in order to enable context-aware mobile robot navigation. We integrate a model-based planner with a distributed learning system based on reservoir computing, to yield personalized planning and resource allocations that account for user preferences and environmental changes. We demonstrate our approach in a real robot ecology, and show that the learning system can effectively exploit historical data about navigation performance to modify the models in the planner, without any prior information concerning the phenomenon being modeled. The plans produced by the adapted planner fail more rarely than the ones generated by a non-adaptive planner. The distributed learning system handles the new learning task autonomously, and is able to automatically identify the sensorial information most relevant for the task, thus reducing the communication and computational overhead of the predictive task.
The increasing isolation of the elderly both in their own homes and in care homes has made the problem of caring for elderly people who live alone an urgent priority. This article presents a proposed design for a heterogeneous multirobot system consisting of (i) a small mobile robot to monitor the well-being of elderly people who live alone and suggest activities to keep them positive and active and (ii) a domestic mobile manipulating robot that helps to perform household tasks. The entire system is integrated in an automated ambient assisted living (AAL) home environment, which also includes a set of low-cost automation sensors, a medical monitoring bracelet and an Android application to propose emotional coaching activities to the person who lives alone. The heterogeneous system uses ROS, IoT technologies, such as Node-RED, and the Home Assistant Platform. Both platforms with the home automation system have been tested over a long period of time and integrated in a real test environment, with good results. The semantic segmentation of the environment used by the mobile manipulator for navigation and for movement in the manipulation area facilitated the tasks of the downstream planners. Results on user interactions with the applications are presented and the use of artificial intelligence to predict mood is discussed. The experiments support the conclusion that the assistance robot correctly proposes activities, such as calling a relative, exercising, etc., during the day, according to the user's detected emotional state, making this an innovative proposal aimed at empowering the elderly so that they can be autonomous in their homes and have a good quality of life.
This paper presents an ongoing collaboration to develop a perceptual anchoring framework which creates and maintains the symbol-percept links concerning household objects. The paper presents an approach to non-trivialize the symbol system using ontologies and to allow for HRI by enabling queries about object properties, their affordances, and their perceptual characteristics as viewed from the robot (e.g. last seen). This position paper briefly describes the objective of creating a long-term perceptual anchoring framework for HRI and outlines the preliminary work done thus far.
In this paper we present an inspection robot to produce gas distribution maps and localize gas sources in large outdoor environments. The robot is equipped with a 3D laser range finder and a remote gas sensor that returns integral concentration measurements. We apply principles of tomography to create a spatial gas distribution model from integral gas concentration measurements. The gas distribution algorithm is framed as a convex optimization problem and it models the mean distribution and the fluctuations of gases. This is important since gas dispersion is not a static phenomenon and, furthermore, areas of high fluctuation can be correlated with the location of an emitting source. We use a compact surface representation created from the measurements of the 3D laser range finder with a state-of-the-art mapping algorithm to obtain a very accurate localization and estimation of the path of the laser beams. In addition, a conic model for the beam of the remote gas sensor is introduced. We observe a substantial improvement in gas source localization capabilities over the previous state of the art in our evaluation carried out in an open field environment.
Whenever people move through their environments they do not move randomly. Instead, they usually follow specific trajectories or motion patterns corresponding to their intentions. Knowledge about such patterns enables a mobile robot to robustly keep track of persons in its environment and to improve its behavior. This paper proposes a technique for learning collections of trajectories that characterize typical motion patterns of persons. Data recorded with laser-range finders is clustered using the expectation maximization algorithm. Based on the result of the clustering process we derive a Hidden Markov Model (HMM) that is applied to estimate the current and future positions of persons based on sensory input. We also describe how to incorporate the probabilistic belief about the potential trajectories of persons into the path planning process. We present several experiments carried out in different environments with a mobile robot equipped with a laser range scanner and a camera system. The results demonstrate that our approach can reliably learn motion patterns of persons, can robustly estimate and predict positions of persons, and can be used to improve the navigation behavior of a mobile robot.
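Once the motion patterns have been distilled into an HMM, predicting a person's future position reduces to propagating a belief over discrete states through the transition matrix. The following is a generic illustration under assumed names, not the authors' implementation.

```python
import numpy as np

def predict_state_distribution(belief, transition, n_steps=1):
    """Propagate a belief over discrete person positions (HMM states)
    through a row-stochastic transition matrix, predicting where the
    person is likely to be n_steps ahead. The result can feed directly
    into a path planner as a probabilistic occupancy prediction."""
    belief = np.asarray(belief, dtype=float)
    for _ in range(n_steps):
        belief = belief @ transition  # one HMM prediction step
    return belief
```

In the full system this prediction step would alternate with a measurement update from the laser-range data; here only the prediction half is shown.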
The responses of zeolite-modified sensors, prepared by screen printing layers of chromium titanium oxide (CTO), were compared to those of unmodified tin oxide sensors using amplitude and transient responses. For the transient responses we used a family of features, derived from the exponential moving average (EMA), to characterize chemo-resistive responses. All sensors were tested simultaneously against 20 individual volatile compounds from four chemical groups. The responses of the two types of sensors showed some independence. The zeolite-modified CTO sensors discriminated compounds better using either the amplitude response or the EMA features, and they also responded three times faster.
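A minimal sketch of EMA-derived transient features follows. We assume, for illustration, that each feature is the extreme value of an exponential moving average of the response's discrete derivative, with one feature per smoothing factor; the exact feature definition in the paper may differ.

```python
def ema_features(signal, alphas=(0.1, 0.01)):
    """Transient features from a chemo-resistive response: for each
    smoothing factor alpha, compute the exponential moving average of
    the discrete derivative and keep its extreme (largest-magnitude)
    value as a compact descriptor of the transient."""
    features = []
    for alpha in alphas:
        ema, extreme = 0.0, 0.0
        prev = signal[0]
        for x in signal[1:]:
            ema = alpha * (x - prev) + (1.0 - alpha) * ema
            if abs(ema) > abs(extreme):
                extreme = ema
            prev = x
        features.append(extreme)
    return features
```

Small alphas emphasize slow transients and large alphas fast ones, so a few alphas together summarize the response shape in just a few numbers.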
Intelligent mobile robots are increasingly used in unstructured domains; one particularly challenging example is planetary exploration. The preparation of such missions is highly non-trivial, especially as it is difficult to carry out realistic experiments without very sophisticated infrastructures. In this paper, we argue that the Unified System for Automation and Robot Simulation (USARSim) offers interesting opportunities for research on planetary exploration by mobile robots. With the example of work on terrain classification, it is shown how synthetic as well as real-world data from Mars can be used to test an algorithm's performance in USARSim. Concretely, experiments with an algorithm for the detection of negotiable ground on a planetary surface are presented. It is shown that the approach performs fast and robustly on planetary surfaces.
Technological innovation in robotics and ICT represents an effective way to tackle the challenge of providing socially sustainable care services for the ageing population. The recent introduction of cloud technologies is opening new opportunities for the provisioning of advanced robotic services based on the cooperation of a number of connected robots, smart environments and devices, enhanced by the huge computational and storage capability of the cloud. In this context, this paper aims to investigate and assess the potential of a cloud robotic system for the provisioning of assistive services for the promotion of active and healthy ageing. The system comprised two different smart environments, located in Italy and Sweden, where a service robot is connected to a cloud platform for the provisioning of localization-based services to the users. The cloud robotic services were tested in the two realistic environments to assess the general feasibility of the solution and to demonstrate the ability to provide assistive location-based services in a multiple-environment framework. The results confirmed the validity of the solution but also suggested a deeper investigation into the dependability of the communication technologies adopted in such systems.
This paper describes the development and validation of the currently smallest aerial platform with olfaction capabilities. The developed Smelling Nano Aerial Vehicle (SNAV) is based on a lightweight commercial nano-quadcopter (27 g) equipped with a custom gas sensing board that can host up to two in situ metal oxide semiconductor (MOX) gas sensors. Due to its small form-factor, the SNAV is not a hazard for humans, enabling its use in public areas or inside buildings. It can autonomously carry out gas sensing missions in hazardous environments inaccessible to terrestrial robots and bigger drones, for example searching for victims and hazardous gas leaks inside pockets that form within the wreckage of collapsed buildings in the aftermath of an earthquake or explosion. The first contribution of this work is assessing the impact of the nano-propellers on the MOX sensor signals at different distances to a gas source. A second contribution is adapting the ‘bout’ detection algorithm proposed by Schmuker et al. (2016), which extracts specific features from the derivative of the MOX sensor response, for real-time operation. The third and main contribution is the experimental validation of the SNAV for gas source localization (GSL) and mapping in a large indoor environment (160 m2) with a gas source placed in challenging positions for the drone, for example hidden in the ceiling of the room or inside a power outlet box. Two GSL strategies are compared, one based on the instantaneous gas sensor response and the other one based on the bout frequency. From the measurements collected (in motion) along a predefined sweeping path we built (in less than 3 min) a 3D map of the gas distribution and identified the most likely source location.
Using the bout frequency yielded on average a higher localization accuracy than using the instantaneous gas sensor response (1.38 m versus 2.05 m error); however, accurate tuning of an additional parameter (the noise threshold) is required in the former case. The main conclusion of this paper is that a nano-drone has the potential to perform gas sensing tasks in complex environments.
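The bout idea can be sketched as follows: a bout is an episode in which the smoothed derivative of the MOX response rises above a noise threshold, and the bout frequency is simply how often this happens per unit time. This is a simplified sketch of the concept, not the exact algorithm of Schmuker et al. (2016); the smoothing scheme and names are our assumptions.

```python
def count_bouts(response, noise_threshold, alpha=0.2):
    """Count 'bouts' in a MOX sensor response: episodes where the
    exponentially smoothed derivative exceeds a noise threshold.
    A bout ends once the smoothed derivative falls back to zero or
    below, so a sustained rise counts only once."""
    bouts, in_bout = 0, False
    ema, prev = 0.0, response[0]
    for x in response[1:]:
        ema = alpha * (x - prev) + (1.0 - alpha) * ema  # smoothed derivative
        prev = x
        if not in_bout and ema > noise_threshold:
            bouts += 1
            in_bout = True
        elif in_bout and ema <= 0.0:
            in_bout = False
    return bouts
```

Dividing the bout count by the window duration gives the bout frequency used for source localization; the sensitivity of this count to `noise_threshold` is exactly the tuning issue noted above.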
Gas distribution modelling can provide potentially life-saving information when assessing the hazards of gaseous emissions and for localization of explosives and toxic or flammable chemicals. In this work, we deployed a three-dimensional (3D) grid of metal oxide semiconductor (MOX) gas sensors in an office room, which allows for novel insights into the complex patterns of indoor gas dispersal. Twelve independent experiments were carried out to better understand the dispersion patterns of a single gas source placed at different locations in the room, including variations in height, release rate and air flow profiles. This dataset is denser and richer than what is currently available, i.e., 2D datasets in wind tunnels. We make it publicly available to enable the community to develop, validate, and compare new approaches related to gas sensing in complex environments.
Robots sharing their space with humans need to be proactive to be helpful. Proactive robots can act on their own initiative in an anticipatory way to benefit humans. In this work, we investigate two ways to make robots proactive. One way is to recognize human intentions and to act to fulfill them, like opening the door that you are about to cross. The other way is to reason about possible future threats or opportunities and to act to prevent or to foster them, like recommending that you take an umbrella because rain has been forecast. In this article, we present approaches to realize these two types of proactive behavior. We then present an integrated system that can generate proactive robot behavior by reasoning on both factors: intentions and predictions. We illustrate our system on a sample use case including a domestic robot and a human. We first run this use case with the two separate proactive systems, intention-based and prediction-based, and then run it with our integrated system. The results show that the integrated system is able to consider a broader variety of aspects that are required for proactivity.
The increasingly ageing population and the tendency to live alone have led science and engineering researchers to search for health care solutions. During the COVID-19 pandemic, the elderly were seriously affected, in addition to suffering from isolation and its associated psychological consequences. This paper provides an overview of the RobWell (Robotic-based Well-Being Monitoring and Coaching System for the Elderly in their Daily Activities) system. It is a system focused on the field of artificial intelligence for mood prediction and coaching. This paper presents a general overview of the initially proposed system, as well as preliminary results related to the home automation subsystem, autonomous robot navigation and mood estimation through machine learning, prior to the final system integration, which will be discussed in future works. The main goal is to improve users' mental well-being during their daily household activities. The system is composed of ambient intelligence with intelligent sensors, actuators and a robotic platform that interacts with the user. A test smart home was set up in which the sensors, actuators and robotic platform were integrated and tested. For artificial intelligence applied to mood prediction, we used machine learning to classify several physiological signals into different moods. In robotics, it was concluded that the ROS autonomous navigation stack and its autodocking algorithm were not reliable enough for this task, while the robot's autonomy was sufficient. Semantic navigation, artificial intelligence and computer vision alternatives are being sought.
Voxel volumes are simple to implement and lend themselves to many of the tools and algorithms available for 2D images. However, the additional dimension of voxels may be costly to manage in memory when mapping large spaces at high resolutions. While lowering the resolution and using interpolation is a common work-around, in the literature authors tend to use either trilinear interpolation or nearest neighbors, and rarely any of the intermediate options. This paper presents a survey of geometric interpolation methods for voxel-based map representations. In particular, we study the truncated signed distance field (TSDF) and the impact of using fewer than 8 samples to perform interpolation within a depth-camera pose tracking and mapping scenario. We find that lowering the number of samples fetched to perform the interpolation results in performance similar to the commonly used trilinear interpolation method, but leads to higher framerates. We also report that lower bit-depth generally leads to performance degradation, though not as much as may be expected, with voxels containing as few as 3 bits sometimes resulting in adequate estimation of camera trajectories.
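To make the two endpoints of that trade-off concrete, here is a minimal sketch (illustrative only, not the paper's implementation) of 8-sample trilinear interpolation and the 1-sample nearest-neighbor variant, on a voxel volume stored as nested lists:

```python
import math

def trilinear(vol, x, y, z):
    """8-sample interpolation: blend the eight corners of the enclosing
    voxel cell, weighted by the fractional coordinates."""
    i, j, k = int(math.floor(x)), int(math.floor(y)), int(math.floor(z))
    fx, fy, fz = x - i, y - j, z - k
    value = 0.0
    for di in (0, 1):
        for dj in (0, 1):
            for dk in (0, 1):
                w = ((fx if di else 1.0 - fx) *
                     (fy if dj else 1.0 - fy) *
                     (fz if dk else 1.0 - fz))
                value += w * vol[i + di][j + dj][k + dk]
    return value

def nearest(vol, x, y, z):
    """1-sample variant: a single fetch, no blending."""
    return vol[round(x)][round(y)][round(z)]
```

The intermediate options studied in the survey fetch between these two extremes; the reported finding is that fewer fetches can approach trilinear quality while raising framerates.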
In this paper the problem of multi-robot collaborative topological map-building is addressed. In this framework, a team of robots is supposed to move in an indoor office-like environment. Each robot, after building a local map by using infrared range-finders, achieves a topological representation of the environment by extracting the most significant features via the Hough transform and comparing them with a set of predefined environmental patterns. The local view of each robot, which is significantly constrained by its limited sensing capabilities, is then strengthened by a collaborative aggregation scheme based on the Transferable Belief Model (TBM). In this way, a better representation of the environment is achieved by each robot with a minimal exchange of information. A preliminary experimental validation, carried out by exploiting data collected from a custom-built team of robots, is presented.
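For readers unfamiliar with the feature-extraction step, the classical Hough voting scheme for straight-line features in 2-D range data can be sketched as follows (grid resolutions and parameter names are illustrative, not taken from the paper):

```python
import math

def hough_lines(points, n_theta=180, rho_res=0.05, rho_max=5.0):
    """Vote each 2-D point into a (theta, rho) accumulator using the
    normal form rho = x*cos(theta) + y*sin(theta); a peak in the
    accumulator corresponds to a straight wall segment.
    Returns the strongest (theta, rho) cell and its vote count."""
    n_rho = int(2 * rho_max / rho_res)
    acc = [[0] * n_rho for _ in range(n_theta)]
    for (x, y) in points:
        for t in range(n_theta):
            theta = math.pi * t / n_theta
            rho = x * math.cos(theta) + y * math.sin(theta)
            r = round((rho + rho_max) / rho_res)   # bin the distance
            if 0 <= r < n_rho:
                acc[t][r] += 1
    votes, t, r = max((acc[t][r], t, r)
                      for t in range(n_theta) for r in range(n_rho))
    return math.pi * t / n_theta, r * rho_res - rho_max, votes
```

In the paper, peaks such as this are then matched against predefined environmental patterns (walls, corners, corridors) to build the topological representation.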
A standing challenge in current intralogistics is to reliably, effectively yet safely coordinate large-scale, heterogeneous multi-robot fleets without posing constraints on the infrastructure or unrealistic assumptions on robots. A centralized approach, proposed by some of the authors in prior work, makes it possible to overcome these limitations with medium-scale fleets (i.e., tens of robots). With the aim of scaling to hundreds of robots, in this paper we explore a decentralized variant of the same approach. The proposed framework maintains the key features of the original approach, namely, ensuring safety despite uncertainties on robot motions, and generality with respect to robot platforms, motion planners and controllers. We include considerations on liveness, and report and discuss solutions to prevent or recover from deadlocks in specific situations. We validate the approach empirically with simulated, large, heterogeneous multi-robot fleets (up to 100 robots tested) operating both in benchmark and realistic environments.
The synthesis of multi-fingered grasps on nontrivial objects requires a realistic representation of the contact between the fingers of a robotic hand and an object. In this work, we use a patch contact model to approximate the contact between a rigid object and a deformable anthropomorphic finger. This contact model is utilized in the computation of Independent Contact Regions (ICRs) that have been proposed as a way to compensate for shortcomings in the finger positioning accuracy of robotic grasping devices. We extend the ICR algorithm to account for the patch contact model and show the benefits of this solution.
The synthesis and evaluation of multi-fingered grasps on complex objects is a challenging problem that has received much attention in the robotics community. Although several promising approaches have been developed, applications to real-world systems are limited to simple objects or gripper configurations. The paradigm of Independent Contact Regions (ICRs) has been proposed as a way to increase the tolerance to grasp positioning errors. This concept is well established, though only on precise geometric object models. This work is concerned with the application of the ICR paradigm to models reconstructed from real-world range data. We propose a method for increasing the robustness of grasp synthesis on uncertain geometric models. The sensitivity of the ICR algorithm to noisy data is evaluated and a filtering approach is proposed to improve the quality of the final result.
A growing interest in the industrial sector for autonomous ground vehicles has prompted significant investment in fleet management systems. Such systems need to accommodate on-line externally imposed temporal and spatial requirements, and to adhere to them even in the presence of contingencies. Moreover, a fleet management system should ensure correctness, i.e., refuse to commit to requirements that cannot be satisfied. We present an approach to obtain sets of alternative execution patterns (called trajectory envelopes) which provide these guarantees. The approach relies on a constraint-based representation shared among multiple solvers, each of which progressively refines trajectory envelopes following a least commitment principle.
Coordinating fleets of autonomous, non-holonomic vehicles is paramount to many industrial applications. While there exist solutions to efficiently calculate trajectories for individual vehicles, an effective methodology to coordinate their motions and to avoid deadlocks is still missing. Decoupled approaches, where motions are calculated independently for each vehicle and then centrally coordinated for execution, have the means to identify deadlocks, but not to solve all of them. We present a novel approach that overcomes this limitation and that can be used to complement the deficiencies of decoupled solutions with centralized coordination. Here, we formally define an extension of the framework of lattice-based motion planning to multi-robot systems and validate it experimentally. Our approach can jointly plan for multiple vehicles and generates kinematically feasible and deadlock-free motions.
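As background on the single-vehicle building block, lattice-based planning searches a graph whose edges are precomputed, kinematically feasible motion primitives. A toy sketch follows (the primitives, costs and 4-heading discretization are invented for illustration; the paper's contribution, the joint multi-robot extension, is not shown here):

```python
from heapq import heappush, heappop

# Invented motion primitives on a 4-heading lattice: for each heading,
# a list of (dx, dy, new_heading, cost). Turns cost 2.0 so the
# Manhattan heuristic below remains admissible.
PRIMS = {
    0: [(1, 0, 0, 1.0), (1, 1, 1, 2.0), (1, -1, 3, 2.0)],     # east
    1: [(0, 1, 1, 1.0), (-1, 1, 2, 2.0), (1, 1, 0, 2.0)],     # north
    2: [(-1, 0, 2, 1.0), (-1, -1, 3, 2.0), (-1, 1, 1, 2.0)],  # west
    3: [(0, -1, 3, 1.0), (1, -1, 0, 2.0), (-1, -1, 2, 2.0)],  # south
}

def lattice_plan(start, goal_xy, obstacles, size):
    """A* over (x, y, heading) states; every edge is a feasible primitive,
    so the resulting path is kinematically feasible by construction."""
    def h(s):  # admissible Manhattan-distance heuristic
        return abs(s[0] - goal_xy[0]) + abs(s[1] - goal_xy[1])
    frontier = [(h(start), 0.0, start, [start])]
    closed = set()
    while frontier:
        _, g, s, path = heappop(frontier)
        if (s[0], s[1]) == goal_xy:
            return path
        if s in closed:
            continue
        closed.add(s)
        for dx, dy, nh, c in PRIMS[s[2]]:
            nx, ny = s[0] + dx, s[1] + dy
            if 0 <= nx < size and 0 <= ny < size and (nx, ny) not in obstacles:
                ns = (nx, ny, nh)
                if ns not in closed:
                    heappush(frontier, (g + c + h(ns), g + c, ns, path + [ns]))
    return None  # goal unreachable on this lattice
```

In the multi-robot setting addressed by the paper, the search space is extended so that several vehicles are planned for jointly, which is what allows deadlocks to be avoided rather than merely detected.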
There is a growing trend in robotics towards implementing behavioural mechanisms based on human psychology, such as the processes associated with thinking. Semantic knowledge has opened new paths in robot navigation, allowing a higher level of abstraction in the representation of information. In contrast with the early years, when navigation relied on geometric navigators that interpreted the environment as a series of accessible areas, or with later developments that led to the use of graph theory, semantic information has moved robot navigation one step further. This work presents a survey on the concepts, methodologies and techniques that allow the inclusion of semantic information in robot navigation systems. The techniques involved have to deal with a range of tasks, from modelling the environment and building a semantic map to including methods to learn new concepts and represent the knowledge acquired, in many cases through interaction with users. As understanding the environment is essential to achieve high-level navigation, this paper reviews techniques for the acquisition of semantic information, paying attention to the two main groups: human-assisted and autonomous techniques. Some state-of-the-art semantic knowledge representations are also studied, including ontologies, cognitive maps and semantic maps. All of this leads to a recent concept, semantic navigation, which integrates the previous topics to generate high-level navigation systems able to deal with real-world complex situations.
A concept of a suspended robot for surface cleaning in silos is presented in this paper. The main requirements and limitations resulting from the specific operational conditions are discussed. Due to the large dimensions of the silo as a confined space, a specific kinematic structure for the robot manipulator is proposed. The major problems in its design are highlighted and an approach to resolving them is proposed. The suggested concept is a reasonable compromise between the basic contradicting factors in the design: the small entrance and large surface of the confined space, and the suspension and stabilization of the robot.
We present a model for anchoring categorical conceptual information which originates from physical perception and the web. The model is an extension of the anchoring framework which is used to create and maintain over time semantically grounded sensor information. Using the augmented anchoring framework that employs complex symbolic knowledge from a commonsense knowledge base, we attempt to ground and integrate symbolic and perceptual data that are available on the web. We introduce conceptual anchors which are representations of general, concrete conceptual terms. We show in an example scenario how conceptual anchors can be coherently integrated with perceptual anchors and commonsense information for the acquisition of novel concepts.