Örebro University Publications
Saffiotti, Alessandro, Professor
ORCID iD: orcid.org/0000-0001-8229-1363
Biography [eng]

My research interests encompass Artificial Intelligence (AI), autonomous robotics, and technology for elderly care. I have been active for more than 25 years in the integration of AI and Robotics into "cognitive robots": you might say, giving a brain to a body, or a body to a brain! I also organize a number of international activities on combining AI and Robotics, including the "Lucia" series of PhD schools. I enjoy collaborative work, and I have participated in 12 EU projects, several EU networks, and many national projects. I serve on the editorial boards of the Artificial Intelligence journal and the International Journal of Social Robotics. I am a member of AAAI, a senior member of IEEE, and a EurAI fellow.

Publications (10 of 204)
Faridghasemnia, M., Renoux, J. & Saffiotti, A. (2024). Visual Noun Modifiers: The Problem of Binding Visual and Linguistic Cues. In: 2024 IEEE International Conference on Robotics and Automation (ICRA). Paper presented at IEEE International Conference on Robotics and Automation, ICRA 2024, Yokohama, May 13-17, 2024 (pp. 11178-11185). Institute of Electrical and Electronics Engineers Inc.
Visual Noun Modifiers: The Problem of Binding Visual and Linguistic Cues
2024 (English). In: 2024 IEEE International Conference on Robotics and Automation (ICRA), Institute of Electrical and Electronics Engineers Inc., 2024, p. 11178-11185. Conference paper, Published paper (Refereed)
Abstract [en]

In many robotic applications, especially those involving humans and the environment, linguistic and visual information must be processed jointly and bound together. Existing works either encode the image or the language into a subsymbolic space, as in the CLIP model, or build a symbolic space of extracted information, as in object detection models. In this paper, we propose to describe images by nouns and modifiers and introduce a new embedded binding space where linguistic and visual cues can effectively be bound. We investigate how state-of-the-art models perform in recognizing nouns and modifiers from images, and propose a method comprising a new dataset and CLIP-like recognition techniques based on transfer learning and metric learning. We present real-world experiments that demonstrate the practical applicability of our approach to robotics applications. Our results indicate that our method can surpass the state of the art in recognizing nouns and modifiers from images. Interestingly, our method exhibits a language characteristic related to context sensitivity.
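As a rough illustration of the general idea of matching noun-modifier phrases to an image in a shared embedding space, the following sketch scores candidate phrases with an off-the-shelf CLIP model through the Hugging Face transformers API. It is not the binding space proposed in the paper; the model name, image file and phrases are illustrative assumptions.

```python
# Minimal sketch: scoring noun-modifier phrases against an image with an
# off-the-shelf CLIP model. This illustrates binding linguistic cues to
# visual input in a shared embedding space; it is NOT the paper's method.
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

image = Image.open("scene.jpg")  # hypothetical input image
phrases = ["red cup", "blue cup", "red plate", "blue plate"]  # noun + modifier

inputs = processor(text=phrases, images=image, return_tensors="pt", padding=True)
with torch.no_grad():
    outputs = model(**inputs)

# logits_per_image holds scaled image-text similarities, one per phrase
probs = outputs.logits_per_image.softmax(dim=-1).squeeze(0)
for phrase, p in zip(phrases, probs):
    print(f"{phrase}: {p.item():.3f}")
```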

Place, publisher, year, edition, pages
Institute of Electrical and Electronics Engineers Inc., 2024
Keywords
Adversarial machine learning, Contrastive Learning, Image coding, Object detection, Object recognition, Robot learning, Transfer learning, Visual languages, ART model, Detection models, Environment information, Linguistic information, Objects detection, Robotics applications, State of the art, Sub-symbolic, Visual cues, Visual information, Linguistics
National Category
Robotics and automation
Identifiers
urn:nbn:se:oru:diva-118593 (URN); 10.1109/ICRA57147.2024.10611332 (DOI); 2-s2.0-85202445079 (Scopus ID); 9798350384574 (ISBN)
Conference
IEEE International Conference on Robotics and Automation, ICRA 2024, Yokohama, May 13-17, 2024
Available from: 2025-01-16. Created: 2025-01-16. Last updated: 2025-01-16. Bibliographically approved.
De Filippo, A., Milano, M., Presutti, V. & Saffiotti, A. (2023). CREAI 2023: Preface to the Second Workshop on Artificial Intelligence and Creativity. In: CEUR Workshop Proceedings. Paper presented at 2nd Workshop on Artificial Intelligence and Creativity, CREAI 2023, Roma, Italy, 6 November, 2023. CEUR-WS, Article ID 193706.
CREAI 2023: Preface to the Second Workshop on Artificial Intelligence and Creativity
2023 (English). In: CEUR Workshop Proceedings, CEUR-WS, 2023, article id 193706. Conference paper, Published paper (Refereed)
Abstract [en]

In recent years, Artificial Intelligence (AI) has gained increasing popularity in the area of art creation and demonstrated its great potential. Research on this topic has produced AI systems able to generate creative outputs in fields such as music, painting, games, design and scientific discovery, either autonomously or in collaboration with humans. AI has thus also helped to analyze and study the mechanisms of creativity from a broader perspective: from the socio-anthropological and psychological angles to the cognitive impact of the autonomous creative processes of artificial intelligence. These advances are opening up new opportunities and research perspectives, while also posing challenging questions related to authorship, integrity, bias and the evaluation of AI artistic outputs. CREAI, the workshop on AI and creativity, addresses these research lines and aims to provide a forum for the AI community to discuss problems, challenges and innovative approaches in the various sub-fields of AI and creativity.

Place, publisher, year, edition, pages
CEUR-WS, 2023
Keywords
Artificial intelligence systems, Creative process, Creatives, Design discoveries, Game design, In-field, Innovative approaches, Intelligence communities, Scientific discovery, Sub fields, Artificial intelligence
National Category
Human Computer Interaction
Identifiers
urn:nbn:se:oru:diva-118334 (URN); 2-s2.0-85176606442 (Scopus ID)
Conference
2nd Workshop on Artificial Intelligence and Creativity, CREAI 2023, Roma, Italy, 6 November, 2023.
Available from: 2025-01-13. Created: 2025-01-13. Last updated: 2025-01-15. Bibliographically approved.
Gugliermo, S., Schaffernicht, E., Koniaris, C. & Saffiotti, A. (2023). Extracting Planning Domains from Execution Traces: a Progress Report. Paper presented at ICAPS 2023, Workshop on Knowledge Engineering for Planning and Scheduling (KEPS 2023), Prague, Czech Republic, July 9-10, 2023.
Extracting Planning Domains from Execution Traces: a Progress Report
2023 (English). Conference paper, Published paper (Refereed)
Abstract [en]

One of the difficulties of using AI planners in industrial applications pertains to the complexity of writing planning domain models. These models are typically constructed by domain planning experts and can become increasingly difficult to codify for large applications. In this paper, we describe our ongoing research on a novel approach to automatically learn planning domains from previously executed traces using Behavior Trees as an intermediate human-readable structure. By involving human planning experts in the learning phase, our approach can benefit from their validation. This paper outlines the initial steps we have taken in this research, and presents the challenges we face in the future.
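For readers unfamiliar with the intermediate representation mentioned above, the sketch below shows a minimal Behavior Tree encoding in Python, of the kind a learned, human-readable structure might take. The node types are standard (Sequence, Fallback, Action); the example tree and action names are hypothetical, and this is not the authors' learning pipeline.

```python
# Minimal sketch of a Behavior Tree representation; node and action names
# are hypothetical, not taken from the paper.
from dataclasses import dataclass, field
from typing import Callable, List

SUCCESS, FAILURE = "SUCCESS", "FAILURE"

@dataclass
class Action:
    name: str
    run: Callable[[], str]  # returns SUCCESS or FAILURE

    def tick(self) -> str:
        return self.run()

@dataclass
class Sequence:
    children: List = field(default_factory=list)

    def tick(self) -> str:
        # Succeeds only if all children succeed, in order.
        for child in self.children:
            if child.tick() == FAILURE:
                return FAILURE
        return SUCCESS

@dataclass
class Fallback:
    children: List = field(default_factory=list)

    def tick(self) -> str:
        # Succeeds as soon as one child succeeds.
        for child in self.children:
            if child.tick() == SUCCESS:
                return SUCCESS
        return FAILURE

# A tree extracted from execution traces might look like:
tree = Sequence([
    Action("move_to_station", lambda: SUCCESS),
    Fallback([
        Action("grasp_part", lambda: FAILURE),
        Action("request_help", lambda: SUCCESS),
    ]),
])
print(tree.tick())  # SUCCESS
```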

National Category
Computer Sciences
Identifiers
urn:nbn:se:oru:diva-110796 (URN)
Conference
ICAPS 2023, Workshop on Knowledge Engineering for Planning and Scheduling (KEPS 2023), Prague, Czech Republic, July 9-10, 2023
Funder
Swedish Foundation for Strategic Research
Available from: 2024-01-17. Created: 2024-01-17. Last updated: 2024-06-03. Bibliographically approved.
Lamanna, L., Faridghasemnia, M., Gerevini, A., Saetti, A., Saffiotti, A., Serafini, L. & Traverso, P. (2023). Learning to Act for Perceiving in Partially Unknown Environments. In: Edith Elkind (Ed.), Proceedings of the Thirty-Second International Joint Conference on Artificial Intelligence (IJCAI 2023). Paper presented at 32nd International Joint Conference on Artificial Intelligence (IJCAI 2023), Macao, S.A.R., August 19-25, 2023 (pp. 5485-5493). International Joint Conferences on Artificial Intelligence.
Learning to Act for Perceiving in Partially Unknown Environments
2023 (English). In: Proceedings of the Thirty-Second International Joint Conference on Artificial Intelligence (IJCAI 2023) / [ed] Edith Elkind, International Joint Conferences on Artificial Intelligence, 2023, p. 5485-5493. Conference paper, Published paper (Refereed)
Abstract [en]

Autonomous agents embedded in a physical environment need the ability to correctly perceive the state of the environment from sensory data. In partially observable environments, certain properties can be perceived only in specific situations and from certain viewpoints that can be reached by the agent by planning and executing actions. For instance, to understand whether a cup is full of coffee, an agent, equipped with a camera, needs to turn on the light and look at the cup from the top. When the proper situations to perceive the desired properties are unknown, an agent needs to learn them and plan to reach them. In this paper, we devise a general method to solve this problem by evaluating the confidence of a neural network online and by using symbolic planning. We experimentally evaluate the proposed approach on several synthetic datasets, and show the feasibility of our approach in a real-world scenario that involves noisy perceptions and noisy actions on a real robot.
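A minimal sketch of one ingredient of the approach described above: assessing a classifier's confidence online and deciding whether to accept the current perception or plan further actions. The entropy measure, the threshold and the planner hook are illustrative assumptions, not the method evaluated in the paper.

```python
# Minimal sketch: online confidence check driving a decision to act.
# Threshold and decision labels are illustrative assumptions.
import torch
import torch.nn.functional as F

def prediction_entropy(logits: torch.Tensor) -> float:
    """Shannon entropy (natural log) of the softmax distribution."""
    probs = F.softmax(logits, dim=-1)
    return float(-(probs * probs.clamp_min(1e-12).log()).sum())

def perceive_or_act(logits: torch.Tensor, threshold: float = 0.5) -> str:
    # Low entropy -> confident perception; high entropy -> plan actions
    # (e.g., turn on the light, move the camera) to get a better viewpoint.
    if prediction_entropy(logits) < threshold:
        return "accept_perception"
    return "plan_information_gathering_actions"

print(perceive_or_act(torch.tensor([4.0, 0.1, 0.1])))  # accept_perception
print(perceive_or_act(torch.tensor([1.0, 0.9, 1.1])))  # plan_..._actions
```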

Place, publisher, year, edition, pages
International Joint Conferences on Artificial Intelligence, 2023
Series
IJCAI International Joint Conference on Artificial Intelligence, ISSN 1045-0823
Keywords
Artificial intelligence, General method, Learn+, Neural-networks, Partially observable environments, Physical environments, Property, Real-world scenario, Sensory data, Synthetic datasets, Unknown environments, Autonomous agents
National Category
Computer Sciences
Identifiers
urn:nbn:se:oru:diva-112138 (URN); 10.24963/ijcai.2023/609 (DOI); 001202344205065 (ISI); 2-s2.0-85170365795 (Scopus ID); 9781956792034 (ISBN)
Conference
32nd International Joint Conference on Artificial Intelligence (IJCAI 2023), Macao, S.A.R., August 19-25, 2023
Funder
Wallenberg AI, Autonomous Systems and Software Program (WASP); Knut and Alice Wallenberg Foundation; EU, Horizon 2020, 101016442
Note

We acknowledge the support of the PNRR project FAIR - Future AI Research (PE00000013), under the NRRP MUR program funded by the NextGenerationEU. This work has also been partially supported by the Wallenberg AI, Autonomous Systems and Software Program (WASP) funded by the Knut and Alice Wallenberg Foundation, and AIPlan4EU funded by the EU Horizon 2020 research and innovation program under GA n. 101016442.

Available from: 2024-03-06. Created: 2024-03-06. Last updated: 2024-08-13. Bibliographically approved.
Köckemann, U., Calisi, D., Gemignani, G., Renoux, J. & Saffiotti, A. (2023). Planning for Automated Testing of Implicit Constraints in Behavior Trees. In: Sven Koenig; Roni Stern; Mauro Vallati (Ed.), Proceedings of the Thirty-Third International Conference on Automated Planning and Scheduling. Paper presented at 33rd International Conference on Automated Planning and Scheduling (ICAPS 2023), Prague, Czech Republic, July 8-13, 2023 (pp. 649-658). AAAI Press, 33.
Planning for Automated Testing of Implicit Constraints in Behavior Trees
2023 (English). In: Proceedings of the Thirty-Third International Conference on Automated Planning and Scheduling / [ed] Sven Koenig; Roni Stern; Mauro Vallati, AAAI Press, 2023, Vol. 33, p. 649-658. Conference paper, Published paper (Refereed)
Abstract [en]

Behavior Trees (BTs) are a formalism increasingly used to control the execution of robotic systems. The strength of BTs resides in their compact, hierarchical and transparent representation. However, when used in practical applications, transparency is often hindered by the introduction of implicit run-time relations between nodes, e.g., because of data dependencies or hardware-related ordering constraints. Manually verifying the correctness of a BT with respect to these hidden relations is a tedious and error-prone task. This paper presents a modular planning-based approach for automatically testing BTs offline at design time, to identify possible executions that may violate given data and ordering constraints and to exhibit traces of these executions to help debugging. Our approach supports both basic and advanced BT node types, e.g., supporting parallel behaviors, and can be extended with other node types as needed. We evaluate our approach on BTs used in a commercially deployed robotics system and on a large set of randomly generated trees, showing that our approach scales to realistic sizes of more than 3000 nodes.
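To make the kind of violation being searched for concrete, here is a minimal sketch that enumerates possible execution orders of a node's children and flags those that break an implicit data dependency. The node names and the constraint are hypothetical, and the paper's planning-based approach scales far beyond this brute-force enumeration.

```python
# Minimal sketch: flag execution orders of (hypothetical) parallel children
# that violate an implicit data dependency (a reader ticked before its writer).
from itertools import permutations

children = ["localize", "grasp", "log_pose"]
# Implicit constraint: "grasp" reads the pose written by "localize".
dependencies = {("localize", "grasp")}  # writer must precede reader

def violations(order):
    return [(w, r) for (w, r) in dependencies
            if order.index(w) > order.index(r)]

for order in permutations(children):
    bad = violations(order)
    if bad:
        print(f"order {order} violates {bad}")  # a trace to help debugging
```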

Place, publisher, year, edition, pages
AAAI Press, 2023
Series
Proceedings of the ... International Conference on Automated Planning and Scheduling, ISSN 2334-0835, E-ISSN 2334-0843 ; 33
Keywords
Automated Planning, Robotics, Behavior Trees
National Category
Computer and Information Sciences
Research subject
Computer Science
Identifiers
urn:nbn:se:oru:diva-112201 (URN); 10.1609/icaps.v33i1.27247 (DOI); 2-s2.0-85169788442 (Scopus ID)
Conference
33rd International Conference on Automated Planning and Scheduling (ICAPS 2023), Prague, Czech Republic, July 8-13, 2023
Projects
AIPlan4EU
Funder
European Commission, 101016442
Available from: 2024-03-07. Created: 2024-03-07. Last updated: 2024-06-03. Bibliographically approved.
Lamanna, L., Serafini, L., Faridghasemnia, M., Saffiotti, A., Saetti, A., Gerevini, A. & Traverso, P. (2023). Planning for Learning Object Properties. In: Proceedings of the AAAI Conference on Artificial Intelligence: Vol. 37 No. 10: AAAI-23 Technical Tracks 10. Paper presented at 37th AAAI Conference on Artificial Intelligence, Washington, D.C., USA, February 7-14, 2023 (pp. 12005-12013). AAAI Press, 37:10
Planning for Learning Object Properties
2023 (English). In: Proceedings of the AAAI Conference on Artificial Intelligence: Vol. 37 No. 10: AAAI-23 Technical Tracks 10, AAAI Press, 2023, Vol. 37:10, p. 12005-12013. Conference paper, Published paper (Refereed)
Abstract [en]

Autonomous agents embedded in a physical environment need the ability to recognize objects and their properties from sensory data. Such a perceptual ability is often implemented by supervised machine learning models, which are pre-trained using a set of labelled data. In real-world, open-ended deployments, however, it is unrealistic to assume that a pre-trained model is available for all possible environments. Therefore, agents need to dynamically learn/adapt/extend their perceptual abilities online, in an autonomous way, by exploring and interacting with the environment where they operate. This paper describes a way to do so, by exploiting symbolic planning. Specifically, we formalize the problem of automatically training a neural network to recognize object properties as a symbolic planning problem (using PDDL). We use planning techniques to produce a strategy for automating the training dataset creation and the learning process. Finally, we provide an experimental evaluation in both a simulated and a real environment, which shows that the proposed approach is able to successfully learn how to recognize new object properties.
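A minimal sketch of the loop described above: execute a planned action sequence to reach states where an object property is observable, and add the resulting observation to the training set. The action names, the staged label and the execution stub are illustrative assumptions; the paper formalizes the problem in PDDL and uses an actual planner.

```python
# Minimal sketch: plan-driven collection of labelled training data.
# Action/state names are hypothetical, not the paper's PDDL domain.
training_set = []

plan = ["turn_on_light", "move_above(cup)", "look_down"]  # planner output

def execute(action):
    # Placeholder for robot execution; returns an observation (image) once
    # the perceiving state is reached, else None.
    print(f"executing {action}")
    return {"image": "cup_top_view.jpg"} if action == "look_down" else None

for action in plan:
    observation = execute(action)
    if observation is not None:
        # The label comes from the planner's knowledge of the staged scene.
        training_set.append((observation["image"], "cup_is_full"))

print(training_set)
```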

Place, publisher, year, edition, pages
AAAI Press, 2023
Series
Proceedings of the AAAI Conference on Artificial Intelligence, ISSN 2159-5399, E-ISSN 2374-3468 ; Vol. 37 No. 10
Keywords
Learning systems, Supervised learning, Labeled data, Learn+, Learning objects, Machine learning models, Object property, Physical environments, Property, Real-world, Sensory data, Supervised machine learning, Autonomous agents
National Category
Computer Sciences
Identifiers
urn:nbn:se:oru:diva-112139 (URN); 10.1609/aaai.v37i10.26416 (DOI); 001243749200056 (ISI); 2-s2.0-85165143019 (Scopus ID); 9781577358800 (ISBN)
Conference
37th AAAI Conference on Artificial Intelligence, Washington, D.C., USA, February 7-14, 2023
Funder
EU, Horizon 2020, 101016442; 952215; Wallenberg AI, Autonomous Systems and Software Program (WASP); Knut and Alice Wallenberg Foundation
Note

This work has been partially supported by AI-Plan4EU and TAILOR, two projects funded by the EU Horizon 2020 research and innovation program under GA n. 101016442 and n. 952215, respectively, and by MUR PRIN-2020 project RIPER (n. 20203FFYLK). We acknowledge the support of the PNRR project FAIR - Future AI Research (PE00000013), under the NRRP MUR program funded by the NextGenerationEU. This work has also been partially supported by the Wallenberg AI, Autonomous Systems and Software Program (WASP) funded by the Knut and Alice Wallenberg Foundation.

Available from: 2024-03-06. Created: 2024-03-06. Last updated: 2024-08-21. Bibliographically approved.
Buyukgoz, S., Grosinger, J., Chetouani, M. & Saffiotti, A. (2022). Two ways to make your robot proactive: Reasoning about human intentions or reasoning about possible futures. Frontiers in Robotics and AI, 9, Article ID 929267.
Two ways to make your robot proactive: Reasoning about human intentions or reasoning about possible futures
2022 (English). In: Frontiers in Robotics and AI, E-ISSN 2296-9144, Vol. 9, article id 929267. Article in journal (Refereed). Published
Abstract [en]

Robots sharing their space with humans need to be proactive to be helpful. Proactive robots can act on their own initiative in an anticipatory way to benefit humans. In this work, we investigate two ways to make robots proactive. One way is to recognize human intentions and to act to fulfill them, like opening the door that you are about to walk through. The other way is to reason about possible future threats or opportunities and to act to prevent or to foster them, like recommending that you take an umbrella since rain has been forecast. In this article, we present approaches to realize these two types of proactive behavior. We then present an integrated system that can generate proactive robot behavior by reasoning on both factors: intentions and predictions. We illustrate our system on a sample use case including a domestic robot and a human. We first run this use case with the two separate proactive systems, intention-based and prediction-based, and then run it with our integrated system. The results show that the integrated system is able to consider a broader variety of aspects that are required for proactivity.
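A minimal sketch of the integration idea described above: candidate actions generated from recognized intentions and from predicted futures, ranked by a single desirability score. All names, conditions and scores are illustrative assumptions, not the authors' architecture.

```python
# Minimal sketch: combining intention-based and prediction-based candidate
# proactive actions. Conditions and scores are illustrative.
def from_intentions(state):
    # e.g., the human walks toward the door carrying boxes
    if state.get("human_heading") == "door" and state.get("hands_full"):
        yield ("open_door", 0.9)

def from_predictions(state):
    # e.g., the weather forecast predicts rain
    if state.get("rain_forecast"):
        yield ("suggest_umbrella", 0.6)

def choose_proactive_action(state):
    candidates = list(from_intentions(state)) + list(from_predictions(state))
    return max(candidates, key=lambda c: c[1], default=None)

state = {"human_heading": "door", "hands_full": True, "rain_forecast": True}
print(choose_proactive_action(state))  # ('open_door', 0.9)
```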

Place, publisher, year, edition, pages
Frontiers Media S.A., 2022
Keywords
Autonomous robots, human intentions, human-centered AI, human–robot interaction, proactive agents, social robot
National Category
Robotics
Identifiers
urn:nbn:se:oru:diva-101051 (URN); 10.3389/frobt.2022.929267 (DOI); 000848417400001 (ISI); 36045640 (PubMedID); 2-s2.0-85136846004 (Scopus ID)
Funder
European Commission, 765955; 952026
Available from: 2022-09-02. Created: 2022-09-02. Last updated: 2022-09-13. Bibliographically approved.
Bontempi, G., Chavarriaga, R., De Canck, H., Girardi, E., Hoos, H., Kilbane-Dawe, I., . . . Maratea, M. (2021). The CLAIRE COVID-19 initiative: approach, experiences and recommendations. Ethics and Information Technology, 23(Suppl. 1), 127-133.
The CLAIRE COVID-19 initiative: approach, experiences and recommendations
2021 (English). In: Ethics and Information Technology, ISSN 1388-1957, E-ISSN 1572-8439, Vol. 23, no Suppl. 1, p. 127-133. Article in journal (Refereed). Published
Abstract [en]

A volunteer effort by Artificial Intelligence (AI) researchers has shown it can deliver significant research outcomes rapidly to help tackle COVID-19. Within two months, CLAIRE's self-organising volunteers delivered the world's first comprehensive curated repository of COVID-19-related datasets useful for drug repurposing, drafted review papers on the roles that CT/X-ray scan analysis and robotics could play, and progressed research in other areas. Given the pace required and the nature of voluntary efforts, the teams faced a number of challenges. These offer insights into how better to prepare for future volunteer scientific efforts and large-scale, data-dependent AI collaborations in general. We offer seven recommendations on how to best leverage such efforts and collaborations in the context of managing future crises.

Place, publisher, year, edition, pages
Springer, 2021
Keywords
Artificial intelligence, COVID-19, Emergency response
National Category
Software Engineering
Identifiers
urn:nbn:se:oru:diva-89619 (URN); 10.1007/s10676-020-09567-7 (DOI); 000616464600001 (ISI); 33584129 (PubMedID); 2-s2.0-85101426290 (Scopus ID)
Available from: 2021-02-16. Created: 2021-02-16. Last updated: 2023-12-08. Bibliographically approved.
Thörn, O., Knudsen, P. & Saffiotti, A. (2020). Human-Robot Artistic Co-Creation: a Study in Improvised Robot Dance. In: 2020 29th IEEE International Conference on Robot and Human Interactive Communication (RO-MAN). Paper presented at 29th IEEE International Conference on Robot and Human Interactive Communication (RO-MAN 2020), Virtual, Naples, Italy, August 31 - September 4, 2020 (pp. 845-850). IEEE.
Human-Robot Artistic Co-Creation: a Study in Improvised Robot Dance
2020 (English). In: 2020 29th IEEE International Conference on Robot and Human Interactive Communication (RO-MAN), IEEE, 2020, p. 845-850. Conference paper, Published paper (Refereed)
Abstract [en]

Joint artistic performance, like music, dance or acting, provides an excellent domain to observe the mechanisms of human-human collaboration. In this paper, we use this domain to study human-robot collaboration and co-creation. We propose a general model in which an AI system mediates the interaction between a human performer and a robotic performer. We then instantiate this model in a case study, implemented using fuzzy logic techniques, in which a human pianist performs jazz improvisations, and a robot dancer performs classical dancing patterns in harmony with the artistic moods expressed by the human. The resulting system has been evaluated in an extensive user study, and successfully demonstrated in public live performances.
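As a toy illustration of the kind of fuzzy-logic mediation described above, the sketch below maps a musical feature (tempo) to a dance parameter through two fuzzy rules. The membership functions, rule outputs and defuzzification are illustrative assumptions, not the system used in the study.

```python
# Minimal sketch: fuzzy-style mapping from a musical feature to a dance
# parameter. Membership functions and rule outputs are illustrative.
def mu_slow(tempo_bpm):   # membership of "slow music"
    return max(0.0, min(1.0, (100 - tempo_bpm) / 40))

def mu_fast(tempo_bpm):   # membership of "fast music"
    return max(0.0, min(1.0, (tempo_bpm - 80) / 40))

def dance_speed(tempo_bpm):
    # Weighted-average defuzzification over two rules:
    #   IF music is slow THEN dance speed 0.2; IF fast THEN 0.9
    w_slow, w_fast = mu_slow(tempo_bpm), mu_fast(tempo_bpm)
    if w_slow + w_fast == 0:
        return 0.5  # neutral default when no rule fires
    return (0.2 * w_slow + 0.9 * w_fast) / (w_slow + w_fast)

print(round(dance_speed(90), 2))  # mid-tempo -> intermediate speed (0.55)
```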

Place, publisher, year, edition, pages
IEEE, 2020
Series
IEEE RO-MAN, ISSN 1944-9445
National Category
Computer Vision and Robotics (Autonomous Systems)
Identifiers
urn:nbn:se:oru:diva-88686 (URN); 10.1109/RO-MAN47096.2020.9223446 (DOI); 000598571700122 (ISI); 2-s2.0-85090918508 (Scopus ID); 978-1-7281-6075-7 (ISBN)
Conference
29th IEEE International Conference on Robot and Human Interactive Communication (RO-MAN 2020), Virtual, Naples, Italy, August 31 - September 4, 2020
Funder
EU, Horizon 2020, 825619
Available from: 2021-01-20. Created: 2021-01-20. Last updated: 2021-01-20. Bibliographically approved.
Tomic, S., Pecora, F. & Saffiotti, A. (2020). Learning Normative Behaviors through Abstraction. In: Giuseppe De Giacomo; Alejandro Catala; Bistra Dilkina; Michela Milano; Senén Barro; Alberto Bugarín; Jérôme Lang (Ed.), ECAI 2020. Paper presented at 24th European Conference on Artificial Intelligence (ECAI 2020), Santiago de Compostela, Spain, August 29 - September 8, 2020 (pp. 1547-1554). IOS Press, 325.
Learning Normative Behaviors through Abstraction
2020 (English). In: ECAI 2020 / [ed] Giuseppe De Giacomo; Alejandro Catala; Bistra Dilkina; Michela Milano; Senén Barro; Alberto Bugarín; Jérôme Lang, IOS Press, 2020, Vol. 325, p. 1547-1554. Conference paper, Published paper (Refereed)
Abstract [en]

Future robots should follow human social norms to be useful and accepted in human society. In this paper, we show how prior knowledge about social norms, represented using an existing normative framework, can be used to (1) guide reinforcement learning agents towards normative policies, and (2) re-use (transfer) learned policies in novel domains. The proposed method is not dependent on a particular reinforcement learning algorithm and can be seen as a means to learn abstract procedural knowledge based on declarative domain-independent semantic specifications.
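A minimal sketch of one algorithm-agnostic way prior normative knowledge can guide a learner, in the spirit of the abstract above: wrapping the environment reward with a penalty for norm-violating transitions. The norm, penalty and state encoding are illustrative assumptions, not the normative framework used in the paper.

```python
# Minimal sketch: reward shaping from a declarative norm, independent of the
# underlying RL algorithm. Norm and penalty are illustrative.
def violates_norm(state, action):
    # e.g., a social norm: do not intrude on a human's personal space
    return action == "move_forward" and state.get("human_distance", 99) < 0.5

def shaped_reward(state, action, env_reward, penalty=1.0):
    return env_reward - (penalty if violates_norm(state, action) else 0.0)

print(shaped_reward({"human_distance": 0.3}, "move_forward", 1.0))  # 0.0
```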

Place, publisher, year, edition, pages
IOS Press, 2020
Series
Frontiers in Artificial Intelligence and Applications, ISSN 0922-6389, E-ISSN 1879-8314 ; 325
National Category
Computer Sciences
Identifiers
urn:nbn:se:oru:diva-90586 (URN); 10.3233/FAIA200263 (DOI); 000650971301101 (ISI); 2-s2.0-85091786020 (Scopus ID); 978-1-64368-100-9 (ISBN); 978-1-64368-101-6 (ISBN)
Conference
24th European Conference on Artificial Intelligence (ECAI 2020), Santiago de Compostela, Spain, August 29 - September 8, 2020
Funder
EU, Horizon 2020, 825619 "AI4EU"
Available from: 2021-03-19. Created: 2021-03-19. Last updated: 2021-06-21. Bibliographically approved.