301–350 of 447
  • 301.
    Otterskog, Magnus
    et al.
    Örebro universitet, Institutionen för teknik.
    Madsén, Kent
    Cell phone performance testing and propagation environment modelling in a reverberation chamber (2003). In: Proceedings of The 2003 Reverberation Chamber, Anechoic Chamber and OATS Users Meeting, 2003. Conference paper (Refereed)
  • 302.
    Otterskog, Magnus
    et al.
    Örebro universitet, Institutionen för teknik.
    Madsén, Kent
    On creating a nonisotropic propagation environment inside a scattered field chamber (2004). In: Microwave and optical technology letters (Print), ISSN 0895-2477, E-ISSN 1098-2760, Vol. 43, no. 3, pp. 192-195. Journal article (Refereed)
    Abstract [en]

    A traditional reverberation chamber creates a statistically isotropic test environment. Tests of communication devices may demand different environments to be able to test, for example, the influence of a diversity antenna. Here, a simple way of altering the distribution of plane waves incident on the device under test is presented. © 2004 Wiley Periodicals, Inc.

  • 303. Pagello, Enrico
    et al.
    Menegatti, Emanuele
    Bredenfeld, Ansgar
    Costa, Paulo
    Christaller, Thomas
    Jacoff, Adam
    Polani, Daniel
    Riedmiller, Martin
    Saffiotti, Alessandro
    Örebro universitet, Institutionen för teknik.
    Sklar, Elizabeth
    Tomoichi, Takashi
    RoboCup-2003: new scientific and technical advances (2004). In: The AI Magazine, ISSN 0738-4602, Vol. 25, no. 2, pp. 81-98. Journal article (Refereed)
    Abstract [en]

    This article reports on the RoboCup-2003 event. RoboCup is no longer just the Soccer World Cup for autonomous robots but has evolved to become a coordinated initiative encompassing four different robotics events: (1) Soccer, (2) Rescue, (3) Junior (focused on education), and (4) a Scientific Symposium. RoboCup-2003 took place from 2 to 11 July 2003 in Padua (Italy); it was colocated with other scientific events in the field of AI and robotics. In this article, in addition to reporting on the results of the games, we highlight the robotics and AI technologies exploited by the teams in the different leagues and describe the most meaningful scientific contributions.

  • 304. Pagello, Enrico
    et al.
    Menegatti, Emanuele
    Bredenfeld, Ansgar
    Costa, Paulo
    Christaller, Thomas
    Jacoff, Adam
    Johnson, Jeffrey
    Riedmiller, Martin
    Saffiotti, Alessandro
    Örebro universitet, Institutionen för teknik.
    Tomoichi, Takashi
    Overview of RoboCup 2003 competition and conferences (2004). In: RoboCup / [ed] Daniel Polani, Brett Browning, Andrea Bonarini, Kazuo Yoshida, 2004, Vol. 7, pp. 1-14. Conference paper (Refereed)
    Abstract [en]

    RoboCup 2003, the seventh RoboCup Competition and Conference, took place from July 2nd to July 11th 2003 in Padua (Italy). The teams had three full days to set up their robots. The competitions were held in the new pavilion no. 7 of the Fair of Padua (Fig. 1). Several scientific events in the field of Robotics and Artificial Intelligence were held in parallel to the competitions. The RoboCup Symposium was held during the last two days. The opening talks took place in the historical Main Hall of the University of Padua and the three parallel Symposium sessions in the conference rooms of the Fair of Padua.

  • 305.
    Palm, Rainer
    Örebro universitet, Institutionen för teknik.
    Multiple-step-ahead prediction in control systems with Gaussian process models and TS-fuzzy models (2007). In: Engineering applications of artificial intelligence, ISSN 0952-1976, E-ISSN 1873-6769, Vol. 20, no. 8, pp. 1023-1035. Journal article (Refereed)
    Abstract [en]

    In this paper one-step-ahead and multiple-step-ahead predictions of time series in disturbed open loop and closed loop systems using Gaussian process models and TS-fuzzy models are described. Gaussian process models are based on the Bayesian framework where the conditional distribution of output measurements is used for the prediction of the system outputs. For one-step-ahead prediction a local process model with a small past horizon is built online with the help of Gaussian processes. Multiple-step-ahead prediction requires the knowledge of previous outputs and control values as well as the future control values. A "naive" multiple-step-ahead prediction is a successive one-step-ahead prediction where the outputs in each consecutive step are used as inputs for the next step of prediction. A global TS-fuzzy model is built to generate the nominal future control trajectory for multiple-step-ahead prediction. In the presence of model uncertainties a correction of the control trajectory computed in this way is needed. This is done by an internal feedback between the two process models. The method is tested on disturbed time invariant and time variant systems for different past horizons. The combination of the TS-fuzzy model and the Gaussian process model together with a correction of the control trajectory shows a good performance of the multiple-step-ahead prediction for systems with uncertainties. © 2007 Elsevier Ltd. All rights reserved.
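
The "naive" successive one-step-ahead prediction described in the abstract above is easy to sketch. The following is an illustrative sketch only, with a made-up first-order toy model standing in for the paper's Gaussian process and TS-fuzzy models:

```python
# Sketch of "naive" multiple-step-ahead prediction: each one-step-ahead
# prediction is fed back as an input for the next step. The toy linear
# model below is an assumption for illustration, not the paper's model.

def naive_multi_step(one_step_model, y_past, controls):
    """Predict len(controls) steps ahead by chaining one-step predictions.

    y_past   -- most recent outputs, oldest first (the past horizon)
    controls -- future control values, one per prediction step
    """
    history = list(y_past)
    predictions = []
    for u in controls:
        y_next = one_step_model(history, u)  # one-step-ahead prediction
        predictions.append(y_next)
        history = history[1:] + [y_next]     # output becomes next input
    return predictions

def toy_model(history, u):
    # Assumed first-order lag dynamics, for illustration only.
    return 0.9 * history[-1] + 0.1 * u

preds = naive_multi_step(toy_model, y_past=[1.0], controls=[0.0, 0.0, 0.0])
```

With zero future control the toy output simply decays geometrically (0.9, 0.81, 0.729); with a real learned one-step model the same loop implements the naive predictor, and in the paper's setup a TS-fuzzy model would supply the future control trajectory.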

  • 306.
    Palm, Rainer
    et al.
    Örebro universitet, Institutionen för teknik.
    Iliev, Boyko
    Örebro universitet, Institutionen för teknik.
    Segmentation and recognition of human grasps for programming-by-demonstration using time-clustering and fuzzy modeling (2007). In: IEEE international fuzzy systems conference, FUZZ-IEEE 2007, New York: IEEE, 2007, pp. 1-6. Conference paper (Refereed)
    Abstract [en]

    In this article we address the problem of programming by demonstration (PbD) of grasping tasks for a five-fingered robotic hand. The robot is instructed by a human operator wearing a data glove capturing the hand poses. For a number of human grasps, the corresponding fingertip trajectories are modeled in time and space by fuzzy clustering and Takagi-Sugeno modeling. This so-called time-clustering leads to grasp models using the time as input parameter and the fingertip positions as outputs. For a test sequence of grasps the control system of the robot hand identifies the grasp segments, classifies the grasps and generates the sequence of grasps shown before. For this purpose, each grasp is correlated with a training sequence. By means of a hybrid fuzzy model the demonstrated grasp sequence can be reconstructed.

  • 307. Parsons, Simon
    et al.
    Pettersson, Ola
    Saffiotti, Alessandro
    Örebro universitet, Institutionen för teknik.
    Wooldridge, Michael
    Intention reconsideration in theory and practice (2000). In: Proceedings of the 14th European conference on artificial intelligence: ECAI, 2000, pp. 378-382. Conference paper (Refereed)
    Abstract [en]

    Autonomous agents operating in complex dynamic environments need the ability to integrate robust plan execution with higher level reasoning. This paper describes work to combine low level navigation techniques drawn from mobile robotics with deliberation techniques drawn from intelligent agents. In particular, we discuss the combination of a navigation system based on fuzzy logic with a deliberator based on the belief/desire/intention (BDI) model. We discuss some of the subtleties involved in this integration, and illustrate it with an example.

  • 308.
    Parsons, Simon
    et al.
    Queen Mary and Westfield College, University of London.
    Pettersson, Ola
    Örebro universitet, Institutionen för teknik.
    Saffiotti, Alessandro
    Örebro universitet, Institutionen för teknik.
    Wooldridge, Michael
    Queen Mary and Westfield College, University of London.
    Robots with the best of intentions (1999). In: Artificial intelligence today: recent trends and developments / [ed] Michael J. Wooldridge, Manuela Veloso, Berlin: Springer, 1999, pp. 329-338. Chapter in book, part of anthology (Other academic)
    Abstract [en]

    Intelligent mobile robots need the ability to integrate robust navigation facilities with higher level reasoning. This paper is an attempt at combining results and techniques from the areas of robot navigation and of intelligent agency. We propose to integrate an existing navigation system based on fuzzy logic with a deliberator based on the so-called BDI model. We discuss some of the subtleties involved in this integration, and illustrate it on a simulated example. Experiments on a real mobile robot are under way.

  • 309.
    Pecora, Federico
    et al.
    Örebro universitet, Institutionen för teknik.
    Cesta, Amedeo
    DCOP for smart homes: a case study (2007). In: Computational intelligence, ISSN 0824-7935, E-ISSN 1467-8640, Vol. 23, no. 4, pp. 395-419. Journal article (Refereed)
  • 310.
    Pecora, Federico
    et al.
    Örebro universitet, Institutionen för teknik.
    Cesta, Amedeo
    Evaluating plans through restrictiveness and resource strength (2005). In: Proceedings of Workshop on Integrating Planning into Scheduling (WIPIS) at ICAPS'05, 2005. Conference paper (Refereed)
  • 311.
    Pecora, Federico
    et al.
    Örebro universitet, Institutionen för teknik.
    Cesta, Amedeo
    Planning and scheduling ingredients for a multi-agent system (2002). In: Proceedings of UK Planning and Scheduling SIG (PlanSig'02), 2002. Conference paper (Refereed)
  • 312.
    Pecora, Federico
    et al.
    Örebro universitet, Institutionen för teknik.
    Modi, Jay P.
    Scerri, Paul
    Reasoning about and dynamically posting n-ary constraints in ADOPT (2006). In: Proceedings of Workshop on Distributed Constraint Reasoning (DCR) at AAMAS'06, 2006. Conference paper (Other academic)
  • 313.
    Pecora, Federico
    et al.
    Örebro universitet, Institutionen för teknik.
    Rasconi, Riccardo
    Cesta, Amedeo
    Assessing the bias of classical planning strategies on makespan-optimizing scheduling (2004). In: Proceedings of the European Conference on Artificial Intelligence (ECAI), 2004. Conference paper (Refereed)
  • 314.
    Persson, Martin
    Örebro universitet, Institutionen för teknik.
    A simulation environment for visual servoing (2002). Licentiate thesis, monograph (Other academic)
  • 315.
    Persson, Martin
    Örebro universitet, Institutionen för teknik.
    Semantic mapping using virtual sensors and fusion of aerial images with sensor data from a ground vehicle (2008). Doctoral thesis, monograph (Other academic)
    Abstract [en]

    In this thesis, semantic mapping is understood to be the process of putting a tag or label on objects or regions in a map. This label should be interpretable by and have a meaning for a human. The use of semantic information has several application areas in mobile robotics. The largest area is in human-robot interaction where the semantics is necessary for a common understanding between robot and human of the operational environment. Other areas include localization through connection of human spatial concepts to particular locations, improving 3D models of indoor and outdoor environments, and model validation.

    This thesis investigates the extraction of semantic information for mobile robots in outdoor environments and the use of semantic information to link ground-level occupancy maps and aerial images. The thesis concentrates on three related issues: i) recognition of human spatial concepts in a scene, ii) the ability to incorporate semantic knowledge in a map, and iii) the ability to connect information collected by a mobile robot with information extracted from an aerial image.

    The first issue deals with a vision-based virtual sensor for classification of views (images). The images are fed into a set of learned virtual sensors, where each virtual sensor is trained for classification of a particular type of human spatial concept. The virtual sensors are evaluated with images from both ordinary cameras and an omni-directional camera, showing robust properties that can cope with variations such as changing season.

    In the second part a probabilistic semantic map is computed based on an occupancy grid map and the output from a virtual sensor. A local semantic map is built around the robot for each position where images have been acquired. This map is a grid map augmented with semantic information in the form of probabilities that the occupied grid cells belong to a particular class. The local maps are fused into a global probabilistic semantic map covering the area along the trajectory of the mobile robot.

    In the third part information extracted from an aerial image is used to improve the mapping process. Region and object boundaries taken from the probabilistic semantic map are used to initialize segmentation of the aerial image. Algorithms for both local segmentation related to the borders and global segmentation of the entire aerial image, exemplified with the two classes ground and buildings, are presented. Ground-level semantic information allows focusing of the segmentation of the aerial image to desired classes and generation of a semantic map that covers a larger area than can be built using only the onboard sensors.
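
The probabilistic-semantic-map idea in the second part — occupied grid cells carrying a class probability that is refined as more images arrive — can be sketched with a standard binary Bayes update. This is an illustrative sketch only; the sensor-model numbers are assumptions, not values from the thesis:

```python
# Sketch: updating one grid cell's P(building) from repeated
# virtual-sensor classifications with a binary Bayes update.
# The hit/false-alarm rates are assumed values for illustration.

def bayes_update(p_building, says_building, p_hit=0.8, p_false=0.2):
    """One Bayes update of P(cell is 'building') for a single reading.

    p_hit   -- P(sensor says 'building' | building)      (assumed)
    p_false -- P(sensor says 'building' | not building)  (assumed)
    """
    if says_building:
        num = p_hit * p_building
        den = num + p_false * (1.0 - p_building)
    else:
        num = (1.0 - p_hit) * p_building
        den = num + (1.0 - p_false) * (1.0 - p_building)
    return num / den

# Repeated "building" readings push an uninformed cell toward 1.
p = 0.5
for _ in range(3):
    p = bayes_update(p, says_building=True)
```

Fusing the local maps along the robot trajectory then amounts to running an update of this kind for every occupied cell observed from each image position.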

  • 316.
    Persson, Martin
    et al.
    Örebro universitet, Institutionen för teknik.
    Duckett, Tom
    Department of Computing and Informatics, University of Lincoln, Lincoln, UK.
    Lilienthal, Achim J.
    Örebro universitet, Institutionen för teknik.
    Fusion of aerial images and sensor data from a ground vehicle for improved semantic mapping (2007). In: Proceedings of the IROS Workshop "From Sensors to Human Spatial Concepts", 2007, pp. 17-24. Conference paper (Refereed)
    Abstract [en]

    This paper investigates the use of semantic information to link ground-level occupancy maps and aerial images. A ground-level semantic map is obtained by a mobile robot equipped with an omnidirectional camera, differential GPS and a laser range finder. The mobile robot uses a virtual sensor for building detection (based on omnidirectional images) to compute the ground-level semantic map, which indicates the probability of the cells being occupied by the wall of a building. These wall estimates from a ground perspective are then matched with edges detected in an aerial image. The result is used to direct a region- and boundary-based segmentation algorithm for building detection in the aerial image. This approach addresses two difficulties simultaneously: 1) the range limitation of mobile robot sensors and 2) the difficulty of detecting buildings in monocular aerial images. With the suggested method building outlines can be detected faster than the mobile robot can explore the area by itself, giving the robot an ability to "see" around corners. At the same time, the approach can compensate for the absence of elevation data in segmentation of aerial images. Our experiments demonstrate that ground-level semantic information (wall estimates) makes it possible to focus the segmentation of the aerial image on buildings and to produce a ground-level semantic map that covers a larger area than can be built using the onboard sensors.

  • 317.
    Persson, Martin
    et al.
    Örebro universitet, Institutionen för teknik.
    Duckett, Tom
    Department of Computing and Informatics, University of Lincoln, Lincoln, United Kingdom.
    Lilienthal, Achim J.
    Örebro universitet, Institutionen för teknik.
    Improved mapping and image segmentation by using semantic information to link aerial images and ground-level information (2007). In: Proceedings of the IEEE international conference on advanced robotics: ICAR 2007, 2007, pp. 924-929. Conference paper (Refereed)
    Abstract [en]

    This paper investigates the use of semantic information to link ground-level occupancy maps and aerial images. In the suggested approach a ground-level semantic map is obtained by a mobile robot equipped with an omnidirectional camera, differential GPS and a laser range finder. The mobile robot uses a virtual sensor for building detection (based on omnidirectional images) to compute the ground-level semantic map, which indicates the probability of the cells being occupied by the wall of a building. These wall estimates from a ground perspective are then matched with edges detected in an aerial image. The result is used to direct a region- and boundary-based segmentation algorithm for building detection in the aerial image. This approach addresses two difficulties simultaneously: 1) the range limitation of mobile robot sensors and 2) the difficulty of detecting buildings in monocular aerial images. With the suggested method building outlines can be detected faster than the mobile robot can explore the area by itself, giving the robot an ability to "see" around corners. At the same time, the approach can compensate for the absence of elevation data in segmentation of aerial images. Our experiments demonstrate that ground-level semantic information (wall estimates) makes it possible to focus the segmentation of the aerial image on buildings and to produce a ground-level semantic map that covers a larger area than can be built using the onboard sensors along the robot trajectory.

  • 318.
    Persson, Martin
    et al.
    Örebro universitet, Institutionen för teknik.
    Duckett, Tom
    Department of Computing and Informatics, University of Lincoln, Lincoln, UK.
    Lilienthal, Achim J.
    Örebro universitet, Institutionen för teknik.
    Virtual sensors for human concepts: building detection by an outdoor mobile robot (2007). In: Robotics and Autonomous Systems, ISSN 0921-8890, E-ISSN 1872-793X, Vol. 55, no. 5, pp. 383-390. Journal article (Refereed)
    Abstract [en]

    In human-robot communication it is often important to relate robot sensor readings to concepts used by humans. We suggest the use of a virtual sensor (one or several physical sensors with a dedicated signal processing unit for the recognition of real world concepts) and a method with which the virtual sensor can learn from a set of generic features. The virtual sensor robustly establishes the link between sensor data and a particular human concept. In this work, we present a virtual sensor for building detection that uses vision and machine learning to classify the image content in a particular direction as representing buildings or non-buildings. The virtual sensor is trained on a diverse set of image data, using features extracted from grey level images. The features are based on edge orientation, the configurations of these edges, and on grey level clustering. To combine these features, the AdaBoost algorithm is applied. Our experiments with an outdoor mobile robot show that the method is able to separate buildings from nature with a high classification rate, and to extrapolate well to images collected under different conditions. Finally, the virtual sensor is applied on the mobile robot, combining its classifications of sub-images from a panoramic view with spatial information (in the form of location and orientation of the robot) in order to communicate the likely locations of buildings to a remote human operator. (c) 2006 Elsevier B.V. All rights reserved.
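
The feature-combination step in the abstract above — AdaBoost over weak classifiers built from edge-orientation and grey-level features — can be illustrated with a small from-scratch version using one-feature threshold "stumps". This is a didactic sketch on made-up data, not the paper's implementation:

```python
# Sketch of discrete AdaBoost with threshold "stumps" over scalar
# features, as a stand-in for the paper's fusion of edge-orientation
# and grey-level features. All data below is made up.

import math

def train_adaboost(X, y, rounds=10):
    """X: list of feature vectors, y: labels in {-1, +1}.
    Returns a list of (feature, threshold, polarity, alpha) stumps."""
    n = len(X)
    w = [1.0 / n] * n                        # per-example weights
    stumps = []
    for _ in range(rounds):
        best = None                          # (weighted error, stump)
        for f in range(len(X[0])):
            for thr in sorted({x[f] for x in X}):
                for pol in (1, -1):
                    err = sum(w[i] for i in range(n)
                              if pol * (1 if X[i][f] >= thr else -1) != y[i])
                    if best is None or err < best[0]:
                        best = (err, (f, thr, pol))
        err, (f, thr, pol) = best
        err = max(err, 1e-10)                # avoid log(0) on perfect stumps
        alpha = 0.5 * math.log((1 - err) / err)
        stumps.append((f, thr, pol, alpha))
        for i in range(n):                   # boost misclassified examples
            pred = pol * (1 if X[i][f] >= thr else -1)
            w[i] *= math.exp(-alpha * y[i] * pred)
        s = sum(w)
        w = [wi / s for wi in w]
    return stumps

def predict(stumps, x):
    score = sum(alpha * pol * (1 if x[f] >= thr else -1)
                for f, thr, pol, alpha in stumps)
    return 1 if score >= 0 else -1

# Toy data: "building" (+1) when the first feature is high.
X = [[0.9, 0.1], [0.8, 0.4], [0.2, 0.9], [0.1, 0.3]]
y = [1, 1, -1, -1]
stumps = train_adaboost(X, y, rounds=3)
```

In practice one would use a library implementation (e.g. scikit-learn's AdaBoostClassifier) on the real image features; the loop above just makes the re-weighting idea concrete.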

  • 319.
    Persson, Martin
    et al.
    Örebro universitet, Institutionen för teknik.
    Duckett, Tom
    Department of Computing and Informatics, University of Lincoln, Lincoln, UK.
    Lilienthal, Achim J.
    Örebro universitet, Institutionen för teknik.
    Virtual sensors for human concepts: building detection by an outdoor mobile robot (2006). In: Proceedings of the IROS 2006 workshop: From Sensors to Human Spatial Concepts, IEEE, 2006, pp. 21-26. Conference paper (Refereed)
    Abstract [en]

    In human–robot communication it is often important to relate robot sensor readings to concepts used by humans. We suggest the use of a virtual sensor (one or several physical sensors with a dedicated signal processing unit for the recognition of real world concepts) and a method with which the virtual sensor can learn from a set of generic features. The virtual sensor robustly establishes the link between sensor data and a particular human concept. In this work, we present a virtual sensor for building detection that uses vision and machine learning to classify the image content in a particular direction as representing buildings or non-buildings. The virtual sensor is trained on a diverse set of image data, using features extracted from grey level images. The features are based on edge orientation, the configurations of these edges, and on grey level clustering. To combine these features, the AdaBoost algorithm is applied. Our experiments with an outdoor mobile robot show that the method is able to separate buildings from nature with a high classification rate, and to extrapolate well to images collected under different conditions. Finally, the virtual sensor is applied on the mobile robot, combining its classifications of sub-images from a panoramic view with spatial information (in the form of location and orientation of the robot) in order to communicate the likely locations of buildings to a remote human operator.

  • 320.
    Persson, Martin
    et al.
    Örebro universitet, Institutionen för teknik.
    Duckett, Tom
    Department of Computing and Informatics, University of Lincoln, Lincoln, United Kingdom.
    Valgren, Christoffer
    Department of Technology, Örebro University, Örebro, Sweden.
    Lilienthal, Achim J.
    Örebro universitet, Institutionen för teknik.
    Probabilistic semantic mapping with a virtual sensor for building/nature detection (2007). In: Proceedings of the 2007 IEEE International symposium on computational intelligence in robotics and automation, CIRA 2007, New York, NY, USA: IEEE, 2007, pp. 236-242, article id 4269870. Conference paper (Refereed)
    Abstract [en]

    In human-robot communication it is often important to relate robot sensor readings to concepts used by humans. We believe that access to semantic maps will make it possible for robots to better communicate information to a human operator and vice versa. The main contribution of this paper is a method that fuses data from different sensor modalities, range sensors and vision sensors are considered, to create a probabilistic semantic map of an outdoor environment. The method combines a learned virtual sensor (understood as one or several physical sensors with a dedicated signal processing unit for recognition of real world concepts) for building detection with a standard occupancy map. The virtual sensor is applied on a mobile robot, combining classifications of sub-images from a panoramic view with spatial information (location and orientation of the robot) giving the likely locations of buildings. This information is combined with an occupancy map to calculate a probabilistic semantic map. Our experiments with an outdoor mobile robot show that the method produces semantic maps with correct labeling and an evident distinction between "building" objects and "nature" objects.

  • 321.
    Persson, Martin
    et al.
    Örebro universitet, Institutionen för teknik.
    Wide, Peter
    Örebro universitet, Institutionen för teknik.
    Using a sensor source intelligence cell to connect and distribute visual information from a commercial game engine in a disaster management exercise (2007). In: IEEE instrumentation and measurement technology conference proceedings, IMTC 2007, New York: IEEE, 2007, pp. 1-5. Conference paper (Refereed)
    Abstract [en]

    This paper presents a system where different scenarios can be played out in a synthetic natural environment, in the form of a modified commercial game used for scenario simulation. This environment is connected to a command and control system that can visualize, process, store, and distribute sensor data and their interpretations within several command levels. It is specifically intended for mobile sensors used in remote sensing tasks. The system has been used in a disaster management exercise, where it distributed information from a virtual accident to different command levels in the crisis management. The information consisted of live and recorded video, reports and map objects.

  • 322.
    Pettersson, David
    et al.
    Örebro universitet, Institutionen för teknik.
    Yukhin, Boris
    Örebro universitet, Institutionen för teknik.
    INOMHUSMILJÖ I SMÅHUS MED FTX-VENTILATION: EN LITTERATURSTUDIE MED ENKÄTUNDERSÖKNINGAR OCH MÄTNINGAR [Indoor environment in single-family houses with FTX ventilation: a literature study with questionnaires and measurements] (2009). Independent thesis, basic level (university diploma), 10 credits / 15 HE credits. Student thesis (Degree project)
  • 323.
    Pettersson, Ola
    Örebro universitet, Institutionen för teknik.
    Execution monitoring in robotics: a survey (2005). In: Robotics and Autonomous Systems, ISSN 0921-8890, E-ISSN 1872-793X, Vol. 53, no. 2, pp. 73-88. Journal article (Refereed)
    Abstract [en]

    Research on execution monitoring in its own right is still not very common within the field of robotics and autonomous systems. More commonly, researchers interested in control architectures or execution planning include monitoring as a small part of their work when they realize that it is needed. On the other hand, execution monitoring has been a well-studied topic within industrial control, although control theorists seldom use this term. Instead they refer to the problem of fault detection and isolation (FDI).

    This survey will use the knowledge and terminology from industrial control in order to classify different execution monitoring approaches applied to robotics. The survey is particularly focused on autonomous mobile robotics.

  • 324.
    Pettersson, Ola
    Örebro universitet, Institutionen för teknik.
    Model-free execution monitoring in behavior-based mobile robotics (2004). Doctoral thesis, monograph (Other academic)
    Abstract [en]

    In the near future, autonomous mobile robots are expected to assist us by performing service tasks in many different areas, including transportation, cleaning, mining, or agriculture. In order to manage these tasks in a changing and partially unpredictable environment, without the help of humans, the robot must have the ability to plan its actions and to execute them robustly and in a safe way. Since the real world is dynamic and not fully predictable, the robot must also have the ability to detect when the execution does not proceed as planned, and to correctly identify the causes of the failure. An execution monitoring system is a system that allows the robot to detect and classify these failures.

    Most current approaches to execution monitoring in robotics are based on the idea of predicting the outcomes of the robot's actions by using some sort of model, and comparing the predicted outcomes with the observed ones. In contrast, this thesis explores the use of model-free approaches to execution monitoring, that is, approaches that do not use predictive models. These approaches solely observe the actual execution of the robot, and detect certain patterns that indicate a problem.

    In this thesis, we show that pattern recognition techniques can be applied to realize model-free execution monitoring by classifying observed behavioral patterns into normal or faulty behaviors. We investigate the use of several such techniques, and verify their utility in a number of experiments involving the navigation of a mobile robot in indoor environments. Statistical measures are used to compare the results given from several realistic simulations. Our approach has also been successfully tested on a real robot navigating in an office environment. Interestingly, this test has shown that we can train a model-free execution monitor in simulation, and then use it on a real robot.
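
The core idea of the thesis — classify observed behavioral patterns as normal or faulty with a monitor trained in simulation — can be sketched with a minimal pattern classifier. The thesis itself uses techniques such as radial basis function networks; the nearest-centroid stand-in and all data below are assumptions for illustration:

```python
# Sketch of a model-free execution monitor: classify a window of
# behavior-activation levels as "normal" or "faulty" by distance to
# class centroids learned from (simulated) training runs.
# The nearest-centroid classifier and the data are illustrative only.

def centroid(windows):
    n, dim = len(windows), len(windows[0])
    return [sum(w[i] for w in windows) / n for i in range(dim)]

def dist2(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b))

class ExecutionMonitor:
    def __init__(self, normal_windows, faulty_windows):
        # "Training in simulation": one centroid per class.
        self.normal = centroid(normal_windows)
        self.faulty = centroid(faulty_windows)

    def classify(self, window):
        return ("normal" if dist2(window, self.normal)
                <= dist2(window, self.faulty) else "faulty")

# Made-up activation patterns: [go_to_goal, avoid_obstacle] levels.
normal = [[0.9, 0.1], [0.8, 0.2]]   # goal-directed motion dominates
faulty = [[0.1, 0.9], [0.2, 0.8]]   # stuck: avoidance dominates
monitor = ExecutionMonitor(normal, faulty)
```

Training on simulated runs and deploying on a real robot only requires that the observed activation patterns look similar in both, which is what the thesis' simulation-to-real experiment examines.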

  • 325.
    Pettersson, Ola
    et al.
    Örebro universitet, Institutionen för teknik.
    Karlsson, Lars
    Örebro universitet, Institutionen för teknik.
    Saffiotti, Alessandro
    Örebro universitet, Institutionen för teknik.
    Model-free execution monitoring by learning from simulation (2005). In: 2005 IEEE International Symposium on Computational Intelligence in Robotics and Automation: CIRA 2005, 2005, pp. 505-511. Conference paper (Refereed)
    Abstract [en]

    Autonomous robots need the ability to plan their actions and to execute them robustly and in a safe way in the face of a changing and partially unpredictable environment. This is especially important if we want to design autonomous robots that can safely cohabit with humans. In order to manage this, these robots need the ability to detect when the execution does not proceed as planned, and to correctly identify the causes of the failure. An execution monitoring system is a system that allows the robot to detect and classify these failures. In this work we show that pattern recognition techniques can be applied to realize execution monitoring by classifying observed behavioral patterns into normal or faulty behaviors. The approach has been successfully tested on a real robot navigating in an office environment. Interestingly, these tests show that we can train an execution monitor in simulation, and then use it on a real robot.

  • 326.
    Pettersson, Ola
    et al.
    Örebro universitet, Institutionen för teknik.
    Karlsson, Lars
    Örebro universitet, Institutionen för teknik.
    Saffiotti, Alessandro
    Örebro universitet, Institutionen för teknik.
    Model-free execution monitoring in behavior-based mobile robotics (2003). Conference paper (Refereed)
    Abstract [en]

    In this paper we present a model-free execution monitor for behavior-based mobile robots. By model-free we mean that the monitoring is based only on the actual execution, without involving any predictive models of the controlled system. Model-free monitors are especially suitable for systems where it is hard to obtain adequate models. In our approach we analyze the activation levels of the different behaviors using a pattern recognition technique. Our model-free execution monitor, which is realized by radial basis function networks, is shown to give a high performance in several realistic simulations.

  • 327. Pettersson, Ola
    et al.
    Karlsson, Lars
    Örebro universitet, Institutionen för teknik.
    Saffiotti, Alessandro
    Örebro universitet, Institutionen för teknik.
    Model-Free Execution Monitoring in Behavior-Based Robotics2007Ingår i: IEEE transactions on systems, man and cybernetics. Part B. Cybernetics, ISSN 1083-4419, E-ISSN 1941-0492, Vol. 37, nr 4, s. 890-901Artikel i tidskrift (Refereegranskat)
    Abstract [en]

    In the near future, autonomous mobile robots are expected to help humans by performing service tasks in many different areas, including personal assistance, transportation, cleaning, mining, or agriculture. In order to manage these tasks in a changing and partially unpredictable environment without the aid of humans, the robot must have the ability to plan its actions and to execute them robustly and safely. The robot must also have the ability to detect when the execution does not proceed as planned and to correctly identify the causes of the failure. An execution monitoring system allows the robot to detect and classify these failures. Most current approaches to execution monitoring in robotics are based on the idea of predicting the outcomes of the robot’s actions by using some sort of predictive model and comparing the predicted outcomes with the observed ones. In contrast, this paper explores the use of model-free approaches to execution monitoring, that is, approaches that do not use predictive models. In this paper, we show that pattern recognition techniques can be applied to realize model-free execution monitoring by classifying observed behavioral patterns into normal or faulty execution. We investigate the use of several such techniques and verify their utility in a number of experiments involving the navigation of a mobile robot in indoor environments.

  • 328.
    Pettersson, Ola
    et al.
    Örebro universitet, Institutionen för teknik.
    Karlsson, Lars
    Örebro universitet, Institutionen för teknik.
    Saffiotti, Alessandro
    Örebro universitet, Institutionen för teknik.
    Steps towards model-free execution monitoring on mobile robots2002Ingår i: Proceedings of the 2nd Swedish workshop on autonomous robots, 2002, s. 45-52Konferensbidrag (Refereegranskat)
    Abstract [en]

    In this paper we present a model-free execution monitor for behavior-based mobile robots. By model-free we mean that the monitoring is based only on the actual execution, without involving any predictive models of the controlled system. Model-free monitors are especially suitable for systems where it is hard to obtain adequate models. In our approach we analyze the activation levels of the different behaviors using a pattern recognition technique. Our model-free execution monitor, which is realized by radial basis function networks, is shown to give a high performance in several realistic simulations.

  • 329.
    Rahayem, Mohamed
    Örebro universitet, Institutionen för teknik.
    An industrial robot as part of an automatic system for Geometric Reverse Engineering2008Ingår i: Robot manipulators / [ed] Marco Ceccarelli, Vienna, Austria: InTech , 2008, s. 441-458Kapitel i bok, del av antologi (Övrigt vetenskapligt)
    Abstract [en]

    In this book we have grouped contributions in 28 chapters from several authors around the world on the many aspects and challenges of research and applications of robots, with the aim of showing the recent advances and the problems that still need to be considered for future improvements of robot success in worldwide frames. Each chapter addresses a specific area of modeling, design, and application of robots, but with an eye to giving an integrated view of what makes a robot a unique modern system for many different uses and future potential applications. Main attention has been focused on design issues thought to be challenging for improving the capabilities and further possibilities of robots for new and old applications, as seen from today's technologies and research programs. Thus, great attention has been addressed to control aspects, which are evolving strongly, also as a function of the improvements in robot modeling, sensors, servo-power systems, and informatics. Other aspects are considered fundamental challenges both in the design and the use of robots with improved performance and capabilities, for example kinematic design, dynamics, and vision integration.

  • 330.
    Rahayem, Mohamed
    Örebro universitet, Institutionen för teknik.
    Planar segmentation for Geometric Reverse Engineering using data from a laser profile scanner mounted on an industrial robot2008Licentiatavhandling, sammanläggning (Övrigt vetenskapligt)
    Abstract [en]

    Laser scanners in combination with devices for accurate orientation like Coordinate Measuring Machines (CMM) are often used in Geometric Reverse Engineering (GRE) to measure point data. The industrial robot as a device for orientation has relatively low accuracy, but the advantages of being numerically controlled, fast, flexible, rather cheap, and compatible with industrial environments. It is therefore of interest to investigate whether it can be used in this application.

    This thesis will describe a measuring system consisting of a laser profile scanner mounted on an industrial robot with a turntable. It will also give an introduction to Geometric Reverse Engineering (GRE) and describe an automatic GRE process using this measuring system. The thesis also presents a detailed accuracy analysis supported by experiments that show how 2D profile data can be used to achieve a higher accuracy than the basic accuracy of the robot. The core topic of the thesis is the investigation of a new technique for planar segmentation. The new method is implemented in the GRE system and compared with an implementation of a more traditional method.

    Results from practical experiments show that the new method is much faster while being equally accurate or better.
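    The traditional method the thesis compares against is planar segmentation by region growing. A highly simplified, hypothetical sketch of that baseline idea on gridded scan data (a flat-plane special case with invented points; the thesis's actual algorithms are more general):

    ```python
    # Hypothetical sketch of planar region growing on a grid of scanned
    # points: start from a seed cell and repeatedly absorb 4-connected
    # neighbours whose height deviates from the running plane estimate
    # (here simply the segment's mean z) by less than a tolerance.

    def grow_plane(grid, seed, tol=0.05):
        """grid: dict mapping (i, j) -> z; returns the set of cells in the segment."""
        segment = {seed}
        frontier = [seed]
        total_z = grid[seed]
        while frontier:
            i, j = frontier.pop()
            mean_z = total_z / len(segment)
            for nb in ((i + 1, j), (i - 1, j), (i, j + 1), (i, j - 1)):
                if nb in grid and nb not in segment and abs(grid[nb] - mean_z) < tol:
                    segment.add(nb)
                    total_z += grid[nb]
                    frontier.append(nb)
        return segment

    # A 3x3 patch: flat at z = 0 except one outlier point.
    grid = {(i, j): 0.0 for i in range(3) for j in range(3)}
    grid[(2, 2)] = 0.5
    print(len(grow_plane(grid, (0, 0))))  # -> 8 (the outlier is excluded)
    ```
    
    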

    Delarbeten
    1. Accuracy analysis of a 3D measurement system based on a laser profile scanner mounted on an industrial robot with a turntable
    Öppna denna publikation i ny flik eller fönster >>Accuracy analysis of a 3D measurement system based on a laser profile scanner mounted on an industrial robot with a turntable
    2007 (Engelska)Ingår i: Proceedings of ETFA 12th IEEE conference on Emerging technologies and Factory Automation, 2007, s. 880-883Konferensbidrag, Publicerat paper (Övrigt vetenskapligt)
    Abstract [en]

    High-accuracy 3D laser measurement systems are used in applications like inspection and reverse engineering (RE). With automatic RE in mind, we have designed and built a system that is based on a laser profile scanner mounted on a standard industrial robot with a turntable. This paper is concerned with the relatively complex accuracy issues of such a system. The different parts of the system are analyzed individually and a brief discussion of how they interact is given. Finally a detailed analysis of the scanner head along with experimental results is presented.

    Nationell ämneskategori
    Teknik och teknologier; Datavetenskap (datalogi)
    Forskningsämne
    Datavetenskap
    Identifikatorer
    urn:nbn:se:oru:diva-2983 (URN)10.1109/EFTA.2007.4416872 (DOI)978-1-4244-0825-2 (ISBN)
    Konferens
    IEEE Conference on Emerging Technologies and Factory Automation, ETFA. 25-28 Sept. 2007
    Tillgänglig från: 2008-06-11 Skapad: 2008-06-11 Senast uppdaterad: 2018-01-13Bibliografiskt granskad
    2. Geometric reverse engineering using a laser profile scanner mounted on an industrial robot
    Öppna denna publikation i ny flik eller fönster >>Geometric reverse engineering using a laser profile scanner mounted on an industrial robot
    2008 (Engelska)Ingår i: Proceedings of the 6th international conference of DAAAM Baltic industrial engineering / [ed] Rein Kyttner, 2008, s. 147-152Konferensbidrag, Publicerat paper (Övrigt vetenskapligt)
    Abstract [en]

    Laser scanners in combination with accurate orientation devices are often used in Geometric Reverse Engineering (GRE) to measure point data. The industrial robot as a device for orientation has relatively low accuracy, but the advantages of being numerically controlled, fast, and flexible, and it is therefore of interest to investigate whether it can be used in this application. We have built a measuring system based on a laser profile scanner mounted on an industrial robot. In this paper we present results from practical tests based on point data. We also show how data from laser profiles can be used to increase accuracy in some cases. Finally we propose a new method for plane segmentation using laser profiles.

    Nyckelord
    Geometric Reverse engineering, 3D measurement systems, laser scanning, segmentation, region growing
    Nationell ämneskategori
    Teknik och teknologier
    Forskningsämne
    Maskinteknik
    Identifikatorer
    urn:nbn:se:oru:diva-2984 (URN)978-1-9985-59-783-5 (ISBN)
    Konferens
    6th International conference of DAAAM Baltic industrial engineering, 24-26 April 2008, Tallinn, Estonia
    Tillgänglig från: 2008-06-11 Skapad: 2008-06-11 Senast uppdaterad: 2017-10-18Bibliografiskt granskad
    3. Planar segmentation of data from a laser profile scanner mounted on an industrial robot
    Öppna denna publikation i ny flik eller fönster >>Planar segmentation of data from a laser profile scanner mounted on an industrial robot
    (Engelska)Manuskript (Övrigt vetenskapligt)
    Nationell ämneskategori
    Systemvetenskap, informationssystem och informatik
    Forskningsämne
    Datalogi
    Identifikatorer
    urn:nbn:se:oru:diva-2985 (URN)
    Tillgänglig från: 2008-06-11 Skapad: 2008-06-11 Senast uppdaterad: 2018-01-13Bibliografiskt granskad
  • 331.
    Rahayem, Mohamed
    et al.
    Örebro universitet, Institutionen för teknik.
    Kjellander, Johan
    Planar segmentation of data from a laser profile scanner mounted on an industrial robotManuskript (Övrigt vetenskapligt)
  • 332.
    Rahayem, Mohamed
    et al.
    Örebro universitet, Institutionen för teknik.
    Kjellander, Johan
    Örebro universitet, Institutionen för teknik.
    Larsson, Sören
    Örebro universitet, Institutionen för teknik.
    Accuracy analysis of a 3D measurement system based on a laser profile scanner mounted on an industrial robot with a turntable2007Ingår i: Proceedings of ETFA 12th IEEE conference on Emerging technologies and Factory Automation, 2007, s. 880-883Konferensbidrag (Övrigt vetenskapligt)
    Abstract [en]

    High-accuracy 3D laser measurement systems are used in applications like inspection and reverse engineering (RE). With automatic RE in mind, we have designed and built a system that is based on a laser profile scanner mounted on a standard industrial robot with a turntable. This paper is concerned with the relatively complex accuracy issues of such a system. The different parts of the system are analyzed individually and a brief discussion of how they interact is given. Finally a detailed analysis of the scanner head along with experimental results is presented.
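
    One standard way to reason about how individually analyzed, independent error sources (robot pose, turntable, scanner head) interact is to combine them in quadrature. This is a generic sketch, not necessarily the analysis used in the paper, and the magnitudes below are invented:

    ```python
    import math

    # Generic root-sum-square combination of independent error sources:
    # if the contributions are independent, the overall 1-sigma error is
    # the square root of the sum of the squared contributions.

    def combined_error(sources):
        """sources: dict of error name -> 1-sigma contribution (same units)."""
        return math.sqrt(sum(e ** 2 for e in sources.values()))

    # Illustrative magnitudes in millimetres (not values from the paper).
    sources = {"robot_pose": 0.5, "turntable": 0.1, "scanner_head": 0.05}
    print(round(combined_error(sources), 3))  # -> 0.512
    ```

    Note how the largest source dominates: reducing the scanner-head term barely changes the total, which is why the paper's focus on the dominant contributions matters.
    
    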

  • 333.
    Rahayem, Mohamed
    et al.
    Örebro universitet, Institutionen för teknik.
    Kjellander, Johan
    Örebro universitet, Institutionen för teknik.
    Larsson, Sören
    Örebro universitet, Institutionen för teknik.
    Geometric reverse engineering using a laser profile scanner mounted on an industrial robot2008Ingår i: Proceedings of the 6th international conference of DAAAM Baltic industrial engineering / [ed] Rein Kyttner, 2008, s. 147-152Konferensbidrag (Övrigt vetenskapligt)
    Abstract [en]

    Laser scanners in combination with accurate orientation devices are often used in Geometric Reverse Engineering (GRE) to measure point data. The industrial robot as a device for orientation has relatively low accuracy, but the advantages of being numerically controlled, fast, and flexible, and it is therefore of interest to investigate whether it can be used in this application. We have built a measuring system based on a laser profile scanner mounted on an industrial robot. In this paper we present results from practical tests based on point data. We also show how data from laser profiles can be used to increase accuracy in some cases. Finally we propose a new method for plane segmentation using laser profiles.

  • 334. Reikerås, Olav
    et al.
    Johansson, Carina B.
    Örebro universitet, Institutionen för teknik.
    Sundfeldt, Mikael
    Hydroxyapatite and carbon coatings for fixation of unloaded titanium implants2004Ingår i: Journal of long-term effects of medical implants, ISSN 1050-6934, E-ISSN 1940-4379, Vol. 14, nr 6, s. 443-454Artikel i tidskrift (Refereegranskat)
    Abstract [en]

    The aim of this study was to investigate the interaction between bone and pure titanium, titanium coated with hydroxyapatite (HA), and titanium coated with carbon in a rat femur model. In 25 rats, the medullary cavity of both femurs was entered by an awl from the trochanteric area. With steel burrs it was successively reamed to a diameter of 2.0 mm. Nails with a diameter of 2.0 mm and a length of 34 mm were inserted in a random manner: either a pure titanium nail, a titanium nail entirely plasma-sprayed with a 75–100 μm layer of HA, or a titanium nail coated with 2–10 μm of carbon. The surface roughness of the pure titanium was characterized by Ra 2.6 μm and Rt 22 μm. Ra of HA was 7.5 μm and Rt 52 μm, and of carbon Ra was 0.4 μm and Rt 4.0 μm. Twelve rats were randomized to a follow-up of 8 weeks, and the remaining 13 rats were followed for 16 weeks. At sacrifice both femora were dissected free from soft tissues and then immersed in fixative. A specimen slice of about 5 mm thickness was prepared from the subtrochanteric region with a water-cooled band-saw. Sample preparation for undecalcified tissue followed the internal guidelines at the laboratories of Biomaterials/Handicap Research. At 8 weeks the median bone bonding contact of the implants was 43% (range 0–74) in the titanium group, 39% (0–75) in the HA group, and 3% (0–59) in the carbon group. At 16 weeks the corresponding figures were 58% (0–78) in the titanium group, 51% (15–75) in the HA group, and 8% (0–79) in the carbon group. In conclusion, we found great variability in bone bonding contact. In general, carbon-coated nails had reduced bone bonding contact both at 8 and at 16 weeks as compared to pure titanium or titanium coated with hydroxyapatite.

  • 335. Reikerås, Olav
    et al.
    Johansson, Carina B.
    Örebro universitet, Institutionen för teknik.
    Sundfeldt, Mikael
    Hydroxyapatite enhances long-term fixation of titanium implants2006Ingår i: Journal of long-term effects of medical implants, ISSN 1050-6934, E-ISSN 1940-4379, Vol. 6, nr 2, s. 165-173Artikel i tidskrift (Refereegranskat)
    Abstract [en]

    The aim of this study was to evaluate osseous integration of hydroxyapatite-coated titanium implants over time as compared to pure titanium. In 20 rats the medullary cavity of both femoral bones was entered by an awl from the trochanteric area. With steel burrs it was successively reamed to a diameter of 2.0 mm. Nails with a diameter of 2.0 mm and a length of 34 mm were inserted into the medullary cavity: a pure titanium nail on the left side and a titanium nail entirely plasma-sprayed with hydroxyapatite (HA) on the right side. The surface roughness of the pure titanium was characterized by Ra (arithmetical mean roughness) 2.6 μm and Rt (maximum profile height) 22 μm, and HA had a roughness of Ra 7.5 μm and Rt 52 μm. The rats were randomized to a follow-up of 6 and 12 months, respectively. At sacrifice the femoral bones were dissected free from soft tissues. The bones were radiographed and then immersed in fixative. A specimen slice of about 5 mm thickness was prepared from the region under the trochanter minor with a water-cooled band-saw. Sample preparation for undecalcified tissue followed the internal guidelines at the laboratories of Biomaterials/Handicap Research. At 6 months the median bone bonding contact of the implants was 40% (range 0–92) in the titanium group and 34% (0–86) in the HA group. At 12 months the median bone bonding contact was 51% (0–97) in the titanium group and 86% (72–98) in the HA group. In conclusion, we found a significant (p = 0.001) increase in bone bonding contact from 6 to 12 months for the HA-coated nails, and significantly (p = 0.043) enhanced bone bonding contact in HA-coated nails at 12 months as compared to pure titanium nails.
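
    The Ra and Rt parameters quoted in both implant studies have simple definitions: Ra is the arithmetical mean of absolute deviations from the profile's mean line, and Rt is the total peak-to-valley height. A minimal sketch with invented profile data (not measurements from the papers):

    ```python
    # Compute the two roughness parameters from a sampled surface profile:
    # Ra = mean absolute deviation from the mean line,
    # Rt = height from the deepest valley to the highest peak.

    def roughness(profile):
        mean = sum(profile) / len(profile)
        ra = sum(abs(z - mean) for z in profile) / len(profile)
        rt = max(profile) - min(profile)
        return ra, rt

    # Illustrative profile heights in micrometres (invented data).
    ra, rt = roughness([2.0, -1.0, 3.0, -2.0, 1.0, -3.0])
    print(round(ra, 2), rt)  # -> 2.0 6.0
    ```
    
    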

  • 336. Remondini, Denis
    et al.
    Saffiotti, Alessandro
    Örebro universitet, Institutionen för teknik.
    A modular, hierarchical, reconfigurable controller for autonomous robots2006Konferensbidrag (Övrigt vetenskapligt)
    Abstract [en]

    Behavior-based systems are one of the most popular paradigms for building controllers. Although this paradigm is intrinsically tied to the notion of modular design, most existing behavior-based controllers do not fully satisfy the important desiderata of modularity, hierarchical structure, and reconfigurability. In this paper we discuss these desiderata, and we describe a fuzzy controller that satisfies them. To illustrate the functioning of our controller, we show experiments run on both a simulator and a real robot.

  • 337.
    Robertsson, Linn
    Örebro universitet, Institutionen för teknik.
    Perception modeling and feature extraction for an electronic tongue2007Licentiatavhandling, monografi (Övrigt vetenskapligt)
  • 338.
    Robertsson, Linn
    et al.
    Örebro universitet, Institutionen för teknik.
    Iliev, Boyko
    Örebro universitet, Institutionen för teknik.
    Palm, Rainer
    Örebro universitet, Institutionen för teknik.
    Wide, Peter
    Örebro universitet, Institutionen för teknik.
    Perception modeling for human-like artificial sensor systems2007Ingår i: International journal of human-computer studies, ISSN 1071-5819, E-ISSN 1095-9300, Vol. 65, nr 5, s. 446-459Artikel i tidskrift (Refereegranskat)
    Abstract [en]

    In this article we present an approach to the design of human-like artificial systems. It uses a perception model to describe how sensory information is processed for a particular task and to correlate human and artificial perception. Since human-like sensors share their principle of operation with natural systems, their response can be interpreted in an intuitive way. Therefore, such sensors allow for easier and more natural human–machine interaction.

    The approach is demonstrated in two applications. The first is an “electronic tongue”, which performs quality assessment of food and water. In the second application we describe the development of an artificial hand for dexterous manipulation. We show that human-like functionality can be achieved even if the structure of the system is not completely biologically inspired.

  • 339.
    Robertsson, Linn
    et al.
    Örebro universitet, Institutionen för teknik.
    Lindquist, Malin
    Örebro universitet, Institutionen för teknik.
    Loutfi, Amy
    Örebro universitet, Institutionen för teknik.
    Iliev, Boyko
    Örebro universitet, Institutionen för teknik.
    Wide, Peter
    Örebro universitet, Institutionen för teknik.
    Human based sensor systems for safety assessment2005Ingår i: Proceedings of the 2005 IEEE International conference on computational intelligence for homeland security and personal safety, 2005. CIHSPS 2005, 2005, s. 137-142Konferensbidrag (Refereegranskat)
    Abstract [en]

    This paper builds on the assumption that a sensor system for personal use performs optimally if it is coherent with the human perception system. We provide arguments for this idea by demonstrating two examples. The first example is a personal taste sensor for finding abnormal ingredients in food. The second application is a mobile sniffing system, coherent with the behavior of a biological system when detecting unwanted material in hidden structures, e.g. explosives in a traveling bag.

  • 340.
    Rydén, Christer
    Örebro universitet, Institutionen för teknik.
    Projekt TO 3310: inverkan av smörjmedelssammansättning vid tråddragning2002Konferensbidrag (Övrigt vetenskapligt)
  • 341.
    Saffiotti, Alessandro
    Örebro universitet, Institutionen för teknik.
    Fuzzy logic in autonomous navigation2001Ingår i: Fuzzy logic techniques for autonomous vehicle navigation / [ed] Dimiter Driankov, Alessandro Saffiotti, Heidelberg: Physica Verlag, 2001, s. 3-24Kapitel i bok, del av antologi (Refereegranskat)
    Abstract [en]

    The development of techniques for autonomous navigation constitutes one of the major trends in current research on mobile robotics. In this case study, we discuss how fuzzy computation techniques have been used in the SRI International mobile robot Flakey to address some of the difficult issues posed by autonomous navigation: (i) how to design basic behaviors; (ii) how to coordinate behaviors to execute full navigation plans; and (iii) how to use approximate map information. Our techniques have been validated both in in-house experiments and in public events. The use of fuzzy logic has resulted in smooth motion control, robust performance in the face of errors in the prior knowledge and in the sensor data, and principled integration between different layers of control.
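
    The behavior design and coordination issues the chapter discusses can be illustrated with a generic fuzzy-blending sketch. This is not Flakey's actual controller; the membership functions, behaviors, and numbers are all invented:

    ```python
    # Generic fuzzy behavior coordination: each behavior proposes a steering
    # command, a context rule gives it a degree of activation in [0, 1], and
    # the final command is the activation-weighted blend (a common
    # defuzzification step).

    def trapezoid(x, a, b, c, d):
        """Standard trapezoidal membership function."""
        if x <= a or x >= d:
            return 0.0
        if b <= x <= c:
            return 1.0
        return (x - a) / (b - a) if x < b else (d - x) / (d - c)

    def blend(behaviors, obstacle_dist):
        """behaviors: list of (activation_fn, steering_command)."""
        pairs = [(act(obstacle_dist), cmd) for act, cmd in behaviors]
        total = sum(w for w, _ in pairs)
        if total == 0:
            return 0.0  # no behavior active: default to straight ahead
        return sum(w * cmd for w, cmd in pairs) / total

    behaviors = [
        (lambda d: trapezoid(d, 0.0, 0.0, 0.5, 1.0), 30.0),  # avoid: active when close
        (lambda d: trapezoid(d, 0.5, 1.0, 5.0, 5.0), 0.0),   # go-to-goal: go straight
    ]
    # Obstacle at 0.75 m: both rules fire at 0.5, giving a partial turn.
    print(blend(behaviors, 0.75))  # -> 15.0
    ```

    The weighted blend is what produces the smooth motion control the abstract mentions: as the obstacle distance changes continuously, the command transitions continuously between the two behaviors instead of switching abruptly.
    
    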

  • 342.
    Saffiotti, Alessandro
    Örebro universitet, Institutionen för teknik.
    Handling uncertainty in control of autonomous robots1999Ingår i: Artificial intelligence today: recent trends and developments / [ed] Michael J. Wooldridge, Manuela Veloso, Berlin: Springer Berlin/Heidelberg, 1999, Vol. 1600, s. 381-407Kapitel i bok, del av antologi (Refereegranskat)
    Abstract [en]

    Autonomous robots need the ability to move purposefully and without human intervention in real-world environments that have not been specifically engineered for them. These environments are characterized by the pervasive presence of uncertainty: the need to cope with this uncertainty constitutes a major challenge for autonomous robots. In this note, we discuss this challenge, and present some specific solutions based on our experience on the use of fuzzy logic in mobile robots. We focus on three issues: how to realize robust motion control; how to flexibly execute navigation plans; and how to approximately estimate the robot's location.

  • 343.
    Saffiotti, Alessandro
    et al.
    Örebro universitet, Institutionen för teknik.
    Broxvall, Mathias
    Örebro universitet, Institutionen för teknik.
    PEIS ecologies: ambient intelligence meets autonomous robotics2005Ingår i: Proceedings of the 2005 joint conference on Smart objects and ambient intelligence: innovative context-aware services: usages and technologies, 2005, s. 275-280Konferensbidrag (Refereegranskat)
    Abstract [en]

    A common vision in the field of autonomous robotics is to create a skilled robot companion that is able to live in our homes and to perform physical tasks to help us in our everyday life. Another vision, coming from the field of ambient intelligence, is to create a network of intelligent devices that provides us with information, communication, and entertainment. We propose to combine these two visions into the new concept of an ecology of networked Physically Embedded Intelligent Systems (PEIS). In this paper, we define this concept, discuss ways to implement it, and illustrate it on a simple example involving some real robotic devices.

  • 344.
    Saffiotti, Alessandro
    et al.
    Örebro universitet, Institutionen för teknik.
    Broxvall, Mathias
    Örebro universitet, Institutionen för teknik.
    Seo, Beom-Su
    Cho, Young-Jo
    Steps toward an ecology of physically embedded intelligent systems2006Konferensbidrag (Refereegranskat)
    Abstract [en]

    The concept of Ecology of Physically Embedded Intelligent Systems, or PEIS-Ecology, combines insights from the fields of ubiquitous robotics and ambient intelligence to provide a new solution to building intelligent robots in the service of people. While this concept provides great potential, it also presents a number of new scientific challenges. In this paper we introduce this concept, discuss its potential and its challenges, and present our current steps toward its realization. We also point to experimental results that show the viability of this concept. The discussion in this paper is also relevant to any type of ubiquitous robot or network robotic system.

  • 345.
    Saffiotti, Alessandro
    et al.
    Örebro universitet, Institutionen för teknik.
    Broxvall, Mathias
    Örebro universitet, Institutionen för teknik.
    Seo, Beom-Su
    Örebro universitet, Institutionen för teknik.
    Cho, Young-Jo
    Örebro universitet, Institutionen för teknik.
    The PEIS-ecology project: a progress report2007Ingår i: Proceedings of the ICRA-07 Workshop on Network Robot Systems. Roma, Italy, 2007, 2007, s. 16-22Konferensbidrag (Refereegranskat)
    Abstract [en]

    The concept of Ecology of Physically Embedded Intelligent Systems, or PEIS-Ecology, combines insights from the fields of ubiquitous robotics and ambient intelligence to provide a new solution to building intelligent robots in the service of people. While this concept provides great potential, it also presents a number of new scientific challenges. The PEIS-Ecology project is an ongoing collaborative project between Swedish and Korean researchers which addresses these challenges. In this paper we introduce the concept of PEIS-Ecology, discuss its potential and its challenges, and present our current steps toward its realization. We also point to experimental results that show the viability of this concept.

  • 346.
    Saffiotti, Alessandro
    et al.
    Örebro universitet, Institutionen för teknik.
    Broxvall, Mathias
    Örebro universitet, Institutionen för teknik.
    Seo, Beom-Su
    Cho, Young-Jo
    The PEIS-ecology project: a progress report2007Konferensbidrag (Refereegranskat)
    Abstract [en]

    The concept of Ecology of Physically Embedded Intelligent Systems, or PEIS-Ecology, combines insights from the fields of ubiquitous robotics and ambient intelligence to provide a new solution to building intelligent robots in the service of people. While this concept provides great potential, it also presents a number of new scientific challenges.

    The PEIS-Ecology project is an ongoing collaborative project between Swedish and Korean researchers which addresses these challenges. In this paper we introduce the concept of PEIS-Ecology, discuss its potential and its challenges, and present our current steps toward its realization. We also point to experimental results that show the viability of this concept.

  • 347.
    Saffiotti, Alessandro
    et al.
    Örebro universitet, Institutionen för teknik.
    Driankov, Dimiter
    Örebro universitet, Institutionen för teknik.
    Duckett, Tom
    Örebro universitet, Institutionen för teknik.
    A system for vision based human-robot interaction2004Konferensbidrag (Refereegranskat)
    Abstract [en]

    We describe our initial steps toward the realization of a robotic system for assisting fire-fighting and rescue services. The system implements the concept of shared autonomy between the robot and the human operator: the mobile robot performs local navigation, sensing and mapping, while the operator interprets the sensor data and provides strategic navigation goals.

  • 348.
    Saffiotti, Alessandro
    et al.
    Örebro universitet, Institutionen för teknik.
    LeBlanc, Kevin
    Örebro universitet, Institutionen för teknik.
    Active perceptual anchoring of robot behavior in a dynamic environment2000Ingår i: IEEE international conference on robotics and automation, ICRA '00: proceedings, 2000, s. 3796-3802Konferensbidrag (Refereegranskat)
    Abstract [en]

    Perceptual anchoring is the process of linking action to the appropriate objects in the environment via perception. The pivot of anchoring is the inclusion of micro-models of the world, or anchors, into a controller. In this paper, we propose to use anchors to focus the perceptual effort according to the current needs of the controller. We describe an active gaze control strategy able to maintain anchoring of several objects in a dynamic environment, and show how we have used it in a team of legged robots in the RoboCup'99 international robot soccer competition.

  • 349.
    Saffiotti, Alessandro
    et al.
    Örebro universitet, Institutionen för teknik.
    Ruspini, Enrique H.
    Global team coordination by local computation2001Konferensbidrag (Refereegranskat)
    Abstract [en]

    Desirability functions are an effective way to define group and individual objectives of a team of cooperating mobile robots. By combining desirability functions, we can identify the individual actions that best satisfy both sets of objectives. Combination, however, is global, posing high demands in terms of communication and computation resources. In this paper, we investigate a technique to perform this combination using local computations. Simulated experiments suggest that, under conditions of spatial locality, team control by local computation achieves the same performance as a global technique.
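
    The core idea of combining desirability functions can be sketched generically. The actions, values, and the use of minimum as the combination operator below are invented for illustration; the paper's local-computation technique is not reproduced here:

    ```python
    # Generic sketch: team and individual objectives each assign a
    # desirability in [0, 1] to every candidate action; combining them
    # (here by minimum, a standard fuzzy conjunction) and taking the
    # argmax picks the action that best satisfies both sets of objectives.

    def best_action(actions, team_desire, own_desire):
        return max(actions, key=lambda a: min(team_desire[a], own_desire[a]))

    actions = ["advance", "defend", "pass"]
    team_desire = {"advance": 0.9, "defend": 0.3, "pass": 0.7}
    own_desire  = {"advance": 0.4, "defend": 0.8, "pass": 0.6}
    print(best_action(actions, team_desire, own_desire))  # -> pass
    ```

    Note that "pass" wins even though neither objective ranks it first on its own: the conjunction rewards the action that is acceptable to both, which is exactly the compromise behavior a cooperating team needs.
    
    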

  • 350.
    Saffiotti, Alessandro
    et al.
    Örebro universitet, Institutionen för teknik.
    Ruspini, Enrique H.
    Konolige, Kurt
    Using fuzzy logic for mobile robot control1999Ingår i: Practical applications of fuzzy technologies / [ed] Hans-Jürgen Zimmermann, Kluwer Academic, MA , 1999, s. 185-205Kapitel i bok, del av antologi (Övrigt vetenskapligt)
    Abstract [en]

    The development of techniques for autonomous operation in real-world, unstructured environments constitutes one of the major trends in current research on mobile robotics. In spite of recent advances, a number of fundamental difficulties remain. In this chapter, we discuss how fuzzy logic techniques can be used to address some of these difficulties. To illustrate the discussion, we describe the fuzzy-logic solutions developed on Flakey, the mobile robot of SRI International.
